Test Report: Docker_Linux_containerd_arm64 20602

                    
a90248a4a931d52b681e38138304d5427e54b74a:2025-04-07:39037

Test failures (1/331)

Order  Failed test                                              Duration (s)
310    TestStartStop/group/old-k8s-version/serial/SecondStart   377.66
TestStartStop/group/old-k8s-version/serial/SecondStart (377.66s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-856421 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
E0407 13:24:46.523885  878594 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/functional-062962/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:25:07.904020  878594 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/addons-596243/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:26:43.454970  878594 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/functional-062962/client.crt: no such file or directory" logger="UnhandledError"
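
The three "Unhandled Error" lines above come from the test binary's client-go cert-rotation watcher and reference client certs for profiles (functional-062962, addons-596243) that earlier tests in this run already deleted; they are most likely stale-kubeconfig noise rather than part of this failure. A hypothetical way to confirm the leftover contexts:

    kubectl config get-contexts --kubeconfig /home/jenkins/minikube-integration/20602-873072/kubeconfig
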
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p old-k8s-version-856421 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: exit status 102 (6m13.185073752s)
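
To replay the failing step outside CI, the same flags can be rerun against a throwaway profile; a minimal sketch, assuming an arm64 binary built to out/ (the profile name below is illustrative):

    # SecondStart is a start -> stop -> start-again cycle on a single profile
    out/minikube-linux-arm64 start -p old-k8s-repro --memory=2200 --wait=true \
      --driver=docker --container-runtime=containerd --kubernetes-version=v1.20.0
    out/minikube-linux-arm64 stop -p old-k8s-repro
    out/minikube-linux-arm64 start -p old-k8s-repro --memory=2200 --wait=true --alsologtostderr \
      --driver=docker --container-runtime=containerd --kubernetes-version=v1.20.0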

-- stdout --
	* [old-k8s-version-856421] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20602
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20602-873072/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20602-873072/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.32.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.2
	* Using the docker driver based on existing profile
	* Starting "old-k8s-version-856421" primary control-plane node in "old-k8s-version-856421" cluster
	* Pulling base image v0.0.46-1743675393-20591 ...
	* Restarting existing docker container for "old-k8s-version-856421" ...
	* Preparing Kubernetes v1.20.0 on containerd 1.7.27 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image registry.k8s.io/echoserver:1.4
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-856421 addons enable metrics-server
	
	* Enabled addons: storage-provisioner, metrics-server, default-storageclass, dashboard
	
	

-- /stdout --
** stderr ** 
	I0407 13:24:46.161004 1095137 out.go:345] Setting OutFile to fd 1 ...
	I0407 13:24:46.161135 1095137 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 13:24:46.161146 1095137 out.go:358] Setting ErrFile to fd 2...
	I0407 13:24:46.161153 1095137 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 13:24:46.161415 1095137 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20602-873072/.minikube/bin
	I0407 13:24:46.161854 1095137 out.go:352] Setting JSON to false
	I0407 13:24:46.162709 1095137 start.go:129] hostinfo: {"hostname":"ip-172-31-24-2","uptime":18431,"bootTime":1744013856,"procs":164,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1081-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I0407 13:24:46.162786 1095137 start.go:139] virtualization:  
	I0407 13:24:46.167498 1095137 out.go:177] * [old-k8s-version-856421] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	I0407 13:24:46.170397 1095137 out.go:177]   - MINIKUBE_LOCATION=20602
	I0407 13:24:46.170578 1095137 notify.go:220] Checking for updates...
	I0407 13:24:46.176287 1095137 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0407 13:24:46.179194 1095137 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20602-873072/kubeconfig
	I0407 13:24:46.182066 1095137 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20602-873072/.minikube
	I0407 13:24:46.185208 1095137 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0407 13:24:46.188047 1095137 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0407 13:24:46.191363 1095137 config.go:182] Loaded profile config "old-k8s-version-856421": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0407 13:24:46.194842 1095137 out.go:177] * Kubernetes 1.32.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.2
	I0407 13:24:46.197625 1095137 driver.go:394] Setting default libvirt URI to qemu:///system
	I0407 13:24:46.238916 1095137 docker.go:123] docker version: linux-28.0.4:Docker Engine - Community
	I0407 13:24:46.239035 1095137 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0407 13:24:46.330987 1095137 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-04-07 13:24:46.320710994 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1081-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0]] Warnings:<nil>}}
	I0407 13:24:46.331098 1095137 docker.go:318] overlay module found
	I0407 13:24:46.334200 1095137 out.go:177] * Using the docker driver based on existing profile
	I0407 13:24:46.336980 1095137 start.go:297] selected driver: docker
	I0407 13:24:46.336998 1095137 start.go:901] validating driver "docker" against &{Name:old-k8s-version-856421 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-856421 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 13:24:46.337100 1095137 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0407 13:24:46.337853 1095137 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0407 13:24:46.425336 1095137 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-04-07 13:24:46.416250605 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1081-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0]] Warnings:<nil>}}
	I0407 13:24:46.425679 1095137 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0407 13:24:46.425734 1095137 cni.go:84] Creating CNI manager for ""
	I0407 13:24:46.425788 1095137 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0407 13:24:46.425834 1095137 start.go:340] cluster config:
	{Name:old-k8s-version-856421 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-856421 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 13:24:46.429205 1095137 out.go:177] * Starting "old-k8s-version-856421" primary control-plane node in "old-k8s-version-856421" cluster
	I0407 13:24:46.432139 1095137 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0407 13:24:46.435156 1095137 out.go:177] * Pulling base image v0.0.46-1743675393-20591 ...
	I0407 13:24:46.437896 1095137 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0407 13:24:46.437939 1095137 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 in local docker daemon
	I0407 13:24:46.437948 1095137 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20602-873072/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0407 13:24:46.437972 1095137 cache.go:56] Caching tarball of preloaded images
	I0407 13:24:46.438068 1095137 preload.go:172] Found /home/jenkins/minikube-integration/20602-873072/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0407 13:24:46.438077 1095137 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on containerd
	I0407 13:24:46.438185 1095137 profile.go:143] Saving config to /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/old-k8s-version-856421/config.json ...
	I0407 13:24:46.458084 1095137 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 in local docker daemon, skipping pull
	I0407 13:24:46.458109 1095137 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 exists in daemon, skipping load
	I0407 13:24:46.458128 1095137 cache.go:230] Successfully downloaded all kic artifacts
	I0407 13:24:46.458160 1095137 start.go:360] acquireMachinesLock for old-k8s-version-856421: {Name:mka794a348148701ceb7e35cf711bf1e3c93119a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0407 13:24:46.458222 1095137 start.go:364] duration metric: took 35.98µs to acquireMachinesLock for "old-k8s-version-856421"
	I0407 13:24:46.458246 1095137 start.go:96] Skipping create...Using existing machine configuration
	I0407 13:24:46.458256 1095137 fix.go:54] fixHost starting: 
	I0407 13:24:46.458508 1095137 cli_runner.go:164] Run: docker container inspect old-k8s-version-856421 --format={{.State.Status}}
	I0407 13:24:46.476373 1095137 fix.go:112] recreateIfNeeded on old-k8s-version-856421: state=Stopped err=<nil>
	W0407 13:24:46.476406 1095137 fix.go:138] unexpected machine state, will restart: <nil>
	I0407 13:24:46.479631 1095137 out.go:177] * Restarting existing docker container for "old-k8s-version-856421" ...
	I0407 13:24:46.482729 1095137 cli_runner.go:164] Run: docker start old-k8s-version-856421
	I0407 13:24:46.826482 1095137 cli_runner.go:164] Run: docker container inspect old-k8s-version-856421 --format={{.State.Status}}
	I0407 13:24:46.852734 1095137 kic.go:430] container "old-k8s-version-856421" state is running.
	I0407 13:24:46.853109 1095137 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-856421
	I0407 13:24:46.878530 1095137 profile.go:143] Saving config to /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/old-k8s-version-856421/config.json ...
	I0407 13:24:46.878839 1095137 machine.go:93] provisionDockerMachine start ...
	I0407 13:24:46.878924 1095137 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-856421
	I0407 13:24:46.931809 1095137 main.go:141] libmachine: Using SSH client type: native
	I0407 13:24:46.932146 1095137 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e66c0] 0x3e8e80 <nil>  [] 0s} 127.0.0.1 34180 <nil> <nil>}
	I0407 13:24:46.932169 1095137 main.go:141] libmachine: About to run SSH command:
	hostname
	I0407 13:24:46.932749 1095137 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:37958->127.0.0.1:34180: read: connection reset by peer
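	# The TCP reset above appears to be the first SSH dial racing the container's sshd
	# right after `docker start`; libmachine keeps retrying, and the hostname probe
	# succeeds roughly three seconds later (13:24:50).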
	I0407 13:24:50.077668 1095137 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-856421
	
	I0407 13:24:50.077776 1095137 ubuntu.go:169] provisioning hostname "old-k8s-version-856421"
	I0407 13:24:50.077892 1095137 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-856421
	I0407 13:24:50.115066 1095137 main.go:141] libmachine: Using SSH client type: native
	I0407 13:24:50.115420 1095137 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e66c0] 0x3e8e80 <nil>  [] 0s} 127.0.0.1 34180 <nil> <nil>}
	I0407 13:24:50.115433 1095137 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-856421 && echo "old-k8s-version-856421" | sudo tee /etc/hostname
	I0407 13:24:50.270350 1095137 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-856421
	
	I0407 13:24:50.270449 1095137 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-856421
	I0407 13:24:50.295660 1095137 main.go:141] libmachine: Using SSH client type: native
	I0407 13:24:50.295976 1095137 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e66c0] 0x3e8e80 <nil>  [] 0s} 127.0.0.1 34180 <nil> <nil>}
	I0407 13:24:50.295993 1095137 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-856421' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-856421/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-856421' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0407 13:24:50.442119 1095137 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0407 13:24:50.442146 1095137 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20602-873072/.minikube CaCertPath:/home/jenkins/minikube-integration/20602-873072/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20602-873072/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20602-873072/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20602-873072/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20602-873072/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20602-873072/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20602-873072/.minikube}
	I0407 13:24:50.442165 1095137 ubuntu.go:177] setting up certificates
	I0407 13:24:50.442175 1095137 provision.go:84] configureAuth start
	I0407 13:24:50.442249 1095137 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-856421
	I0407 13:24:50.470388 1095137 provision.go:143] copyHostCerts
	I0407 13:24:50.470452 1095137 exec_runner.go:144] found /home/jenkins/minikube-integration/20602-873072/.minikube/ca.pem, removing ...
	I0407 13:24:50.470467 1095137 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20602-873072/.minikube/ca.pem
	I0407 13:24:50.470540 1095137 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20602-873072/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20602-873072/.minikube/ca.pem (1078 bytes)
	I0407 13:24:50.470643 1095137 exec_runner.go:144] found /home/jenkins/minikube-integration/20602-873072/.minikube/cert.pem, removing ...
	I0407 13:24:50.470648 1095137 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20602-873072/.minikube/cert.pem
	I0407 13:24:50.470676 1095137 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20602-873072/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20602-873072/.minikube/cert.pem (1123 bytes)
	I0407 13:24:50.470730 1095137 exec_runner.go:144] found /home/jenkins/minikube-integration/20602-873072/.minikube/key.pem, removing ...
	I0407 13:24:50.470735 1095137 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20602-873072/.minikube/key.pem
	I0407 13:24:50.470759 1095137 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20602-873072/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20602-873072/.minikube/key.pem (1675 bytes)
	I0407 13:24:50.470816 1095137 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20602-873072/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20602-873072/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20602-873072/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-856421 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-856421]
	I0407 13:24:51.028545 1095137 provision.go:177] copyRemoteCerts
	I0407 13:24:51.028624 1095137 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0407 13:24:51.028672 1095137 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-856421
	I0407 13:24:51.063797 1095137 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34180 SSHKeyPath:/home/jenkins/minikube-integration/20602-873072/.minikube/machines/old-k8s-version-856421/id_rsa Username:docker}
	I0407 13:24:51.162602 1095137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-873072/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0407 13:24:51.208565 1095137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-873072/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0407 13:24:51.257185 1095137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-873072/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0407 13:24:51.304039 1095137 provision.go:87] duration metric: took 861.849756ms to configureAuth
	I0407 13:24:51.304074 1095137 ubuntu.go:193] setting minikube options for container-runtime
	I0407 13:24:51.304290 1095137 config.go:182] Loaded profile config "old-k8s-version-856421": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0407 13:24:51.304305 1095137 machine.go:96] duration metric: took 4.425450441s to provisionDockerMachine
	I0407 13:24:51.304313 1095137 start.go:293] postStartSetup for "old-k8s-version-856421" (driver="docker")
	I0407 13:24:51.304329 1095137 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0407 13:24:51.304389 1095137 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0407 13:24:51.304432 1095137 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-856421
	I0407 13:24:51.335510 1095137 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34180 SSHKeyPath:/home/jenkins/minikube-integration/20602-873072/.minikube/machines/old-k8s-version-856421/id_rsa Username:docker}
	I0407 13:24:51.447227 1095137 ssh_runner.go:195] Run: cat /etc/os-release
	I0407 13:24:51.450895 1095137 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0407 13:24:51.450941 1095137 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0407 13:24:51.450952 1095137 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0407 13:24:51.450960 1095137 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0407 13:24:51.450975 1095137 filesync.go:126] Scanning /home/jenkins/minikube-integration/20602-873072/.minikube/addons for local assets ...
	I0407 13:24:51.451037 1095137 filesync.go:126] Scanning /home/jenkins/minikube-integration/20602-873072/.minikube/files for local assets ...
	I0407 13:24:51.451115 1095137 filesync.go:149] local asset: /home/jenkins/minikube-integration/20602-873072/.minikube/files/etc/ssl/certs/8785942.pem -> 8785942.pem in /etc/ssl/certs
	I0407 13:24:51.451218 1095137 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0407 13:24:51.460239 1095137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-873072/.minikube/files/etc/ssl/certs/8785942.pem --> /etc/ssl/certs/8785942.pem (1708 bytes)
	I0407 13:24:51.503517 1095137 start.go:296] duration metric: took 199.182857ms for postStartSetup
	I0407 13:24:51.503669 1095137 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0407 13:24:51.503767 1095137 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-856421
	I0407 13:24:51.547336 1095137 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34180 SSHKeyPath:/home/jenkins/minikube-integration/20602-873072/.minikube/machines/old-k8s-version-856421/id_rsa Username:docker}
	I0407 13:24:51.650357 1095137 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0407 13:24:51.656394 1095137 fix.go:56] duration metric: took 5.198130306s for fixHost
	I0407 13:24:51.656426 1095137 start.go:83] releasing machines lock for "old-k8s-version-856421", held for 5.198183172s
	I0407 13:24:51.656501 1095137 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-856421
	I0407 13:24:51.676357 1095137 ssh_runner.go:195] Run: cat /version.json
	I0407 13:24:51.676406 1095137 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-856421
	I0407 13:24:51.676652 1095137 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0407 13:24:51.676704 1095137 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-856421
	I0407 13:24:51.713680 1095137 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34180 SSHKeyPath:/home/jenkins/minikube-integration/20602-873072/.minikube/machines/old-k8s-version-856421/id_rsa Username:docker}
	I0407 13:24:51.717916 1095137 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34180 SSHKeyPath:/home/jenkins/minikube-integration/20602-873072/.minikube/machines/old-k8s-version-856421/id_rsa Username:docker}
	I0407 13:24:51.833459 1095137 ssh_runner.go:195] Run: systemctl --version
	I0407 13:24:51.973539 1095137 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0407 13:24:51.977913 1095137 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0407 13:24:52.015324 1095137 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
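	# The find/sed one-liner above patches any loopback CNI config in place: it injects a
	# "name": "loopback" field when missing and pins "cniVersion" to 1.0.0, keeping older
	# loopback configs parseable by newer CNI plugins.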
	I0407 13:24:52.015404 1095137 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0407 13:24:52.028173 1095137 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0407 13:24:52.028196 1095137 start.go:495] detecting cgroup driver to use...
	I0407 13:24:52.028231 1095137 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0407 13:24:52.028281 1095137 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0407 13:24:52.046816 1095137 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0407 13:24:52.066195 1095137 docker.go:217] disabling cri-docker service (if available) ...
	I0407 13:24:52.066320 1095137 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0407 13:24:52.086366 1095137 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0407 13:24:52.103082 1095137 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0407 13:24:52.234689 1095137 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0407 13:24:52.376217 1095137 docker.go:233] disabling docker service ...
	I0407 13:24:52.376335 1095137 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0407 13:24:52.392912 1095137 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0407 13:24:52.406141 1095137 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0407 13:24:52.553265 1095137 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0407 13:24:52.701543 1095137 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0407 13:24:52.719714 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0407 13:24:52.740464 1095137 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I0407 13:24:52.761072 1095137 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0407 13:24:52.774185 1095137 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0407 13:24:52.774315 1095137 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0407 13:24:52.786703 1095137 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0407 13:24:52.797781 1095137 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0407 13:24:52.812284 1095137 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0407 13:24:52.822016 1095137 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0407 13:24:52.839288 1095137 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0407 13:24:52.848947 1095137 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0407 13:24:52.859599 1095137 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0407 13:24:52.874303 1095137 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 13:24:53.032277 1095137 ssh_runner.go:195] Run: sudo systemctl restart containerd
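	# The sed edits above align containerd with this cluster: sandbox_image is pinned to
	# registry.k8s.io/pause:3.2 (the pause image kubeadm v1.20 defaults to) and SystemdCgroup
	# is forced to false to match the "cgroupfs" driver detected on the host; the patched
	# config.toml fragment would read roughly:
	#   [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	#     SystemdCgroup = false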
	I0407 13:24:53.377175 1095137 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0407 13:24:53.377305 1095137 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0407 13:24:53.386159 1095137 start.go:563] Will wait 60s for crictl version
	I0407 13:24:53.386284 1095137 ssh_runner.go:195] Run: which crictl
	I0407 13:24:53.396526 1095137 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0407 13:24:53.464817 1095137 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.27
	RuntimeApiVersion:  v1
	I0407 13:24:53.464939 1095137 ssh_runner.go:195] Run: containerd --version
	I0407 13:24:53.507756 1095137 ssh_runner.go:195] Run: containerd --version
	I0407 13:24:53.545938 1095137 out.go:177] * Preparing Kubernetes v1.20.0 on containerd 1.7.27 ...
	I0407 13:24:53.549123 1095137 cli_runner.go:164] Run: docker network inspect old-k8s-version-856421 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0407 13:24:53.583640 1095137 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0407 13:24:53.587772 1095137 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0407 13:24:53.608178 1095137 kubeadm.go:883] updating cluster {Name:old-k8s-version-856421 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-856421 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0407 13:24:53.608290 1095137 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0407 13:24:53.608346 1095137 ssh_runner.go:195] Run: sudo crictl images --output json
	I0407 13:24:53.693203 1095137 containerd.go:627] all images are preloaded for containerd runtime.
	I0407 13:24:53.693223 1095137 containerd.go:534] Images already preloaded, skipping extraction
	I0407 13:24:53.693287 1095137 ssh_runner.go:195] Run: sudo crictl images --output json
	I0407 13:24:53.757891 1095137 containerd.go:627] all images are preloaded for containerd runtime.
	I0407 13:24:53.757913 1095137 cache_images.go:84] Images are preloaded, skipping loading
	I0407 13:24:53.757922 1095137 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.20.0 containerd true true} ...
	I0407 13:24:53.758058 1095137 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=old-k8s-version-856421 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-856421 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
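	# Note: the empty ExecStart= line in the drop-in above is the standard systemd idiom
	# for clearing the unit's inherited ExecStart before defining the override below it.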
	I0407 13:24:53.758128 1095137 ssh_runner.go:195] Run: sudo crictl info
	I0407 13:24:53.827432 1095137 cni.go:84] Creating CNI manager for ""
	I0407 13:24:53.827514 1095137 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0407 13:24:53.827539 1095137 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0407 13:24:53.827598 1095137 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-856421 NodeName:old-k8s-version-856421 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0407 13:24:53.827779 1095137 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "old-k8s-version-856421"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0407 13:24:53.827899 1095137 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0407 13:24:53.840463 1095137 binaries.go:44] Found k8s binaries, skipping transfer
	I0407 13:24:53.840616 1095137 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0407 13:24:53.852232 1095137 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (442 bytes)
	I0407 13:24:53.880832 1095137 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0407 13:24:53.922101 1095137 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2125 bytes)
	I0407 13:24:53.948500 1095137 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0407 13:24:53.952579 1095137 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0407 13:24:53.972855 1095137 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 13:24:54.124760 1095137 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0407 13:24:54.149747 1095137 certs.go:68] Setting up /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/old-k8s-version-856421 for IP: 192.168.76.2
	I0407 13:24:54.149840 1095137 certs.go:194] generating shared ca certs ...
	I0407 13:24:54.149871 1095137 certs.go:226] acquiring lock for ca certs: {Name:mk03094d90434f2a42c24ebaddfee021594c5911 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:24:54.150093 1095137 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20602-873072/.minikube/ca.key
	I0407 13:24:54.150186 1095137 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20602-873072/.minikube/proxy-client-ca.key
	I0407 13:24:54.150213 1095137 certs.go:256] generating profile certs ...
	I0407 13:24:54.150356 1095137 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/old-k8s-version-856421/client.key
	I0407 13:24:54.150477 1095137 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/old-k8s-version-856421/apiserver.key.67e5f325
	I0407 13:24:54.150562 1095137 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/old-k8s-version-856421/proxy-client.key
	I0407 13:24:54.150727 1095137 certs.go:484] found cert: /home/jenkins/minikube-integration/20602-873072/.minikube/certs/878594.pem (1338 bytes)
	W0407 13:24:54.150788 1095137 certs.go:480] ignoring /home/jenkins/minikube-integration/20602-873072/.minikube/certs/878594_empty.pem, impossibly tiny 0 bytes
	I0407 13:24:54.150818 1095137 certs.go:484] found cert: /home/jenkins/minikube-integration/20602-873072/.minikube/certs/ca-key.pem (1675 bytes)
	I0407 13:24:54.150872 1095137 certs.go:484] found cert: /home/jenkins/minikube-integration/20602-873072/.minikube/certs/ca.pem (1078 bytes)
	I0407 13:24:54.150932 1095137 certs.go:484] found cert: /home/jenkins/minikube-integration/20602-873072/.minikube/certs/cert.pem (1123 bytes)
	I0407 13:24:54.150987 1095137 certs.go:484] found cert: /home/jenkins/minikube-integration/20602-873072/.minikube/certs/key.pem (1675 bytes)
	I0407 13:24:54.151069 1095137 certs.go:484] found cert: /home/jenkins/minikube-integration/20602-873072/.minikube/files/etc/ssl/certs/8785942.pem (1708 bytes)
	I0407 13:24:54.151937 1095137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-873072/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0407 13:24:54.234245 1095137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-873072/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0407 13:24:54.301656 1095137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-873072/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0407 13:24:54.387687 1095137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-873072/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0407 13:24:54.442999 1095137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/old-k8s-version-856421/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0407 13:24:54.475508 1095137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/old-k8s-version-856421/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0407 13:24:54.522235 1095137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/old-k8s-version-856421/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0407 13:24:54.575107 1095137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/old-k8s-version-856421/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0407 13:24:54.619521 1095137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-873072/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0407 13:24:54.675081 1095137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-873072/.minikube/certs/878594.pem --> /usr/share/ca-certificates/878594.pem (1338 bytes)
	I0407 13:24:54.721551 1095137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-873072/.minikube/files/etc/ssl/certs/8785942.pem --> /usr/share/ca-certificates/8785942.pem (1708 bytes)
	I0407 13:24:54.748788 1095137 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0407 13:24:54.769117 1095137 ssh_runner.go:195] Run: openssl version
	I0407 13:24:54.775557 1095137 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/878594.pem && ln -fs /usr/share/ca-certificates/878594.pem /etc/ssl/certs/878594.pem"
	I0407 13:24:54.787148 1095137 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/878594.pem
	I0407 13:24:54.791245 1095137 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  7 12:44 /usr/share/ca-certificates/878594.pem
	I0407 13:24:54.791358 1095137 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/878594.pem
	I0407 13:24:54.799240 1095137 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/878594.pem /etc/ssl/certs/51391683.0"
	I0407 13:24:54.809136 1095137 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8785942.pem && ln -fs /usr/share/ca-certificates/8785942.pem /etc/ssl/certs/8785942.pem"
	I0407 13:24:54.823430 1095137 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8785942.pem
	I0407 13:24:54.830333 1095137 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  7 12:44 /usr/share/ca-certificates/8785942.pem
	I0407 13:24:54.830479 1095137 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8785942.pem
	I0407 13:24:54.837690 1095137 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8785942.pem /etc/ssl/certs/3ec20f2e.0"
	I0407 13:24:54.852382 1095137 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0407 13:24:54.870786 1095137 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0407 13:24:54.874782 1095137 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  7 12:37 /usr/share/ca-certificates/minikubeCA.pem
	I0407 13:24:54.874898 1095137 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0407 13:24:54.883034 1095137 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
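	# The <hash>.0 symlinks created above follow OpenSSL's c_rehash layout:
	# `openssl x509 -hash -noout -in <cert>` prints the subject-name hash, and
	# /etc/ssl/certs/<hash>.0 is the filename OpenSSL probes during chain verification.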
	I0407 13:24:54.896267 1095137 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0407 13:24:54.900575 1095137 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0407 13:24:54.914263 1095137 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0407 13:24:54.921345 1095137 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0407 13:24:54.938656 1095137 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0407 13:24:54.954012 1095137 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0407 13:24:54.970637 1095137 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0407 13:24:54.977897 1095137 kubeadm.go:392] StartCluster: {Name:old-k8s-version-856421 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-856421 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 13:24:54.978065 1095137 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0407 13:24:54.978165 1095137 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0407 13:24:55.051587 1095137 cri.go:89] found id: "e8606d211bc70838dc98631de3d337f0689089d54c4dd0963c66fed159c72bce"
	I0407 13:24:55.051673 1095137 cri.go:89] found id: "b2ada56c528ba0083dc91efb8ea35b5b75cc614fa96a6cd93819b45a72ee05fb"
	I0407 13:24:55.051695 1095137 cri.go:89] found id: "1e215f6f3ad8e0bd3b6e794eeed7be2edfdd8c13538897b791d2e8e1db120357"
	I0407 13:24:55.051717 1095137 cri.go:89] found id: "77f2619b2d1aad99f71633fed74393b3b19f570432847d24bea319ce3f3cc54b"
	I0407 13:24:55.051751 1095137 cri.go:89] found id: "ad3658b16b26420a5c8d95f062de8e38e1c9d8d5235a77bf2a50e7228aabb735"
	I0407 13:24:55.051776 1095137 cri.go:89] found id: "c6f6d481b0f4c36b1a380741fe1a1dc69435fb844001d3a69feaa1fddcdd4030"
	I0407 13:24:55.051798 1095137 cri.go:89] found id: "d89f223f22b8606c12ec0e778fa21d2abe90349e214332d4a1fa32b45f514d9b"
	I0407 13:24:55.051831 1095137 cri.go:89] found id: "2b349ae2ec417a9d000e61e8c66afdcccb2d202d992eec33826df20e410f87a2"
	I0407 13:24:55.051851 1095137 cri.go:89] found id: ""
	I0407 13:24:55.051942 1095137 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	W0407 13:24:55.068842 1095137 kubeadm.go:399] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-04-07T13:24:55Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
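The warning above is tolerated rather than fatal: `/run/containerd/runc/k8s.io` does not exist yet after the container restart, so there are no paused containers to resume and the run proceeds to the restart check. A sketch of issuing the same probe and downgrading failure to a warning (the tolerant error handling is an assumption about intent, based on the W-level log line):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // listPaused runs the `runc list` probe from the log. A non-zero exit
    // (e.g. the root directory is missing) yields an empty result instead
    // of an error, mirroring the "unpause failed" warning above.
    func listPaused() []byte {
        out, err := exec.Command("sudo", "runc",
            "--root", "/run/containerd/runc/k8s.io",
            "list", "-f", "json").Output()
        if err != nil {
            fmt.Printf("W unpause skipped: %v\n", err)
            return nil
        }
        return out
    }

    func main() {
        fmt.Printf("%s\n", listPaused())
    }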
	I0407 13:24:55.068990 1095137 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0407 13:24:55.078773 1095137 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0407 13:24:55.078847 1095137 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0407 13:24:55.078933 1095137 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0407 13:24:55.091929 1095137 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0407 13:24:55.092678 1095137 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-856421" does not appear in /home/jenkins/minikube-integration/20602-873072/kubeconfig
	I0407 13:24:55.093027 1095137 kubeconfig.go:62] /home/jenkins/minikube-integration/20602-873072/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-856421" cluster setting kubeconfig missing "old-k8s-version-856421" context setting]
	I0407 13:24:55.093689 1095137 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20602-873072/kubeconfig: {Name:mk9de2da01a51fd73232a20700f86bdc259a91ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
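The three kubeconfig lines above show the repair path: the profile's cluster and context entries are missing from the Jenkins kubeconfig, so the file is rewritten under a write lock before the restart continues. A hedged client-go sketch of the same repair (the path, server URL, and the absence of file locking are simplifications, not minikube's exact logic):

    package main

    import (
        "log"

        "k8s.io/client-go/tools/clientcmd"
        api "k8s.io/client-go/tools/clientcmd/api"
    )

    func main() {
        path := "/home/jenkins/.kube/config" // illustrative path
        cfg, err := clientcmd.LoadFromFile(path)
        if err != nil {
            cfg = api.NewConfig() // unreadable or absent: start fresh
        }
        name := "old-k8s-version-856421"
        // Add the missing cluster and context entries, then write back.
        cfg.Clusters[name] = &api.Cluster{Server: "https://192.168.76.2:8443"}
        cfg.Contexts[name] = &api.Context{Cluster: name, AuthInfo: name}
        cfg.CurrentContext = name
        if err := clientcmd.WriteToFile(*cfg, path); err != nil {
            log.Fatal(err)
        }
    }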
	I0407 13:24:55.095732 1095137 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0407 13:24:55.115393 1095137 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.76.2
	I0407 13:24:55.115503 1095137 kubeadm.go:597] duration metric: took 36.611741ms to restartPrimaryControlPlane
	I0407 13:24:55.115546 1095137 kubeadm.go:394] duration metric: took 137.658086ms to StartCluster
	I0407 13:24:55.115580 1095137 settings.go:142] acquiring lock: {Name:mk3e960f3698515246acbd5cb37ff276e0a43a72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:24:55.115675 1095137 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20602-873072/kubeconfig
	I0407 13:24:55.116753 1095137 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20602-873072/kubeconfig: {Name:mk9de2da01a51fd73232a20700f86bdc259a91ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:24:55.117076 1095137 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0407 13:24:55.117611 1095137 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0407 13:24:55.117757 1095137 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-856421"
	I0407 13:24:55.117775 1095137 addons.go:238] Setting addon storage-provisioner=true in "old-k8s-version-856421"
	W0407 13:24:55.117782 1095137 addons.go:247] addon storage-provisioner should already be in state true
	I0407 13:24:55.117810 1095137 host.go:66] Checking if "old-k8s-version-856421" exists ...
	I0407 13:24:55.118623 1095137 cli_runner.go:164] Run: docker container inspect old-k8s-version-856421 --format={{.State.Status}}
	I0407 13:24:55.119146 1095137 config.go:182] Loaded profile config "old-k8s-version-856421": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0407 13:24:55.119272 1095137 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-856421"
	I0407 13:24:55.119287 1095137 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-856421"
	I0407 13:24:55.119614 1095137 cli_runner.go:164] Run: docker container inspect old-k8s-version-856421 --format={{.State.Status}}
	I0407 13:24:55.123141 1095137 addons.go:69] Setting dashboard=true in profile "old-k8s-version-856421"
	I0407 13:24:55.123180 1095137 addons.go:238] Setting addon dashboard=true in "old-k8s-version-856421"
	W0407 13:24:55.123189 1095137 addons.go:247] addon dashboard should already be in state true
	I0407 13:24:55.123229 1095137 host.go:66] Checking if "old-k8s-version-856421" exists ...
	I0407 13:24:55.123809 1095137 cli_runner.go:164] Run: docker container inspect old-k8s-version-856421 --format={{.State.Status}}
	I0407 13:24:55.132606 1095137 out.go:177] * Verifying Kubernetes components...
	I0407 13:24:55.141616 1095137 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-856421"
	I0407 13:24:55.141665 1095137 addons.go:238] Setting addon metrics-server=true in "old-k8s-version-856421"
	W0407 13:24:55.141686 1095137 addons.go:247] addon metrics-server should already be in state true
	I0407 13:24:55.141763 1095137 host.go:66] Checking if "old-k8s-version-856421" exists ...
	I0407 13:24:55.149971 1095137 cli_runner.go:164] Run: docker container inspect old-k8s-version-856421 --format={{.State.Status}}
	I0407 13:24:55.170943 1095137 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 13:24:55.198462 1095137 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0407 13:24:55.201616 1095137 addons.go:238] Setting addon default-storageclass=true in "old-k8s-version-856421"
	W0407 13:24:55.201636 1095137 addons.go:247] addon default-storageclass should already be in state true
	I0407 13:24:55.201661 1095137 host.go:66] Checking if "old-k8s-version-856421" exists ...
	I0407 13:24:55.202163 1095137 cli_runner.go:164] Run: docker container inspect old-k8s-version-856421 --format={{.State.Status}}
	I0407 13:24:55.202436 1095137 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0407 13:24:55.202451 1095137 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0407 13:24:55.202501 1095137 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-856421
	I0407 13:24:55.213246 1095137 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0407 13:24:55.216428 1095137 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0407 13:24:55.221796 1095137 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0407 13:24:55.221827 1095137 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0407 13:24:55.221912 1095137 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-856421
	I0407 13:24:55.231666 1095137 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0407 13:24:55.235673 1095137 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0407 13:24:55.235699 1095137 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0407 13:24:55.235767 1095137 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-856421
	I0407 13:24:55.286462 1095137 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34180 SSHKeyPath:/home/jenkins/minikube-integration/20602-873072/.minikube/machines/old-k8s-version-856421/id_rsa Username:docker}
	I0407 13:24:55.290799 1095137 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0407 13:24:55.290818 1095137 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0407 13:24:55.290878 1095137 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-856421
	I0407 13:24:55.291302 1095137 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34180 SSHKeyPath:/home/jenkins/minikube-integration/20602-873072/.minikube/machines/old-k8s-version-856421/id_rsa Username:docker}
	I0407 13:24:55.291216 1095137 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34180 SSHKeyPath:/home/jenkins/minikube-integration/20602-873072/.minikube/machines/old-k8s-version-856421/id_rsa Username:docker}
	I0407 13:24:55.324008 1095137 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34180 SSHKeyPath:/home/jenkins/minikube-integration/20602-873072/.minikube/machines/old-k8s-version-856421/id_rsa Username:docker}
	I0407 13:24:55.419836 1095137 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0407 13:24:55.459453 1095137 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-856421" to be "Ready" ...
	I0407 13:24:55.557056 1095137 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0407 13:24:55.557082 1095137 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0407 13:24:55.564156 1095137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0407 13:24:55.603736 1095137 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0407 13:24:55.603814 1095137 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0407 13:24:55.629628 1095137 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0407 13:24:55.629718 1095137 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0407 13:24:55.696378 1095137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0407 13:24:55.699461 1095137 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0407 13:24:55.699535 1095137 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0407 13:24:55.702672 1095137 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0407 13:24:55.702742 1095137 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0407 13:24:55.750272 1095137 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0407 13:24:55.750345 1095137 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0407 13:24:55.817885 1095137 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0407 13:24:55.817972 1095137 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0407 13:24:55.845934 1095137 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0407 13:24:55.846017 1095137 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0407 13:24:55.905120 1095137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0407 13:24:55.954939 1095137 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0407 13:24:55.955018 1095137 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	W0407 13:24:55.992305 1095137 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0407 13:24:55.992354 1095137 retry.go:31] will retry after 132.758073ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
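From here the log settles into a retry loop: each addon apply fails with "connection refused" on localhost:8443 because the apiserver inside the restarted container is not up yet, and retry.go re-runs the command after a short, growing, jittered delay (133ms, 227ms, 312ms, ... below). A minimal sketch of that pattern (the function name and backoff constants are assumptions, not minikube's exact code):

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retry runs fn up to maxAttempts times, sleeping a jittered, growing
    // delay between failures -- the behavior behind the retry.go lines here.
    func retry(maxAttempts int, base time.Duration, fn func() error) error {
        var err error
        for attempt := 0; attempt < maxAttempts; attempt++ {
            if err = fn(); err == nil {
                return nil
            }
            delay := base*time.Duration(attempt+1) + time.Duration(rand.Int63n(int64(base)))
            fmt.Printf("will retry after %v: %v\n", delay, err)
            time.Sleep(delay)
        }
        return err
    }

    func main() {
        calls := 0
        err := retry(5, 200*time.Millisecond, func() error {
            calls++
            if calls < 3 {
                return errors.New("connection to the server localhost:8443 was refused")
            }
            return nil
        })
        fmt.Println("done:", err)
    }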
	I0407 13:24:56.078285 1095137 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0407 13:24:56.078325 1095137 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0407 13:24:56.125629 1095137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0407 13:24:56.170815 1095137 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0407 13:24:56.170911 1095137 retry.go:31] will retry after 227.201927ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0407 13:24:56.197616 1095137 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0407 13:24:56.197692 1095137 retry.go:31] will retry after 311.814515ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0407 13:24:56.212175 1095137 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0407 13:24:56.212245 1095137 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0407 13:24:56.277359 1095137 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0407 13:24:56.277440 1095137 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0407 13:24:56.336334 1095137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0407 13:24:56.343584 1095137 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0407 13:24:56.343684 1095137 retry.go:31] will retry after 274.037386ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0407 13:24:56.398936 1095137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0407 13:24:56.509870 1095137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0407 13:24:56.557803 1095137 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0407 13:24:56.557838 1095137 retry.go:31] will retry after 291.088396ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0407 13:24:56.615527 1095137 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0407 13:24:56.615561 1095137 retry.go:31] will retry after 227.116627ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0407 13:24:56.618960 1095137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0407 13:24:56.717986 1095137 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0407 13:24:56.718041 1095137 retry.go:31] will retry after 272.338008ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0407 13:24:56.805934 1095137 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0407 13:24:56.805977 1095137 retry.go:31] will retry after 816.114206ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0407 13:24:56.843237 1095137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0407 13:24:56.849626 1095137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0407 13:24:56.990572 1095137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0407 13:24:57.048486 1095137 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0407 13:24:57.048520 1095137 retry.go:31] will retry after 817.098811ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0407 13:24:57.062400 1095137 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0407 13:24:57.062436 1095137 retry.go:31] will retry after 459.979601ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0407 13:24:57.184233 1095137 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0407 13:24:57.184264 1095137 retry.go:31] will retry after 561.461539ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0407 13:24:57.460884 1095137 node_ready.go:53] error getting node "old-k8s-version-856421": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-856421": dial tcp 192.168.76.2:8443: connect: connection refused
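The node_ready lines poll the node object directly at https://192.168.76.2:8443 and treat connection-refused as "not yet" rather than failure; the node turns Ready at 13:25:14 further down. A client-go sketch of such a wait, assuming an already-constructed clientset (helper name and poll interval are illustrative):

    package nodewait

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitNodeReady polls until the named node reports the Ready condition,
    // swallowing transient errors (e.g. connection refused during an
    // apiserver restart) the same way the log above keeps polling through them.
    func waitNodeReady(cs kubernetes.Interface, name string, timeout time.Duration) error {
        return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
            node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                return false, nil // tolerated: keep polling
            }
            for _, c := range node.Status.Conditions {
                if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                    return true, nil
                }
            }
            return false, nil
        })
    }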
	I0407 13:24:57.523205 1095137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0407 13:24:57.622505 1095137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0407 13:24:57.666888 1095137 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0407 13:24:57.666922 1095137 retry.go:31] will retry after 603.30577ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0407 13:24:57.746155 1095137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0407 13:24:57.784022 1095137 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0407 13:24:57.784051 1095137 retry.go:31] will retry after 609.881854ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0407 13:24:57.866392 1095137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0407 13:24:57.901461 1095137 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0407 13:24:57.901494 1095137 retry.go:31] will retry after 489.132058ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0407 13:24:58.045921 1095137 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0407 13:24:58.045954 1095137 retry.go:31] will retry after 1.187060245s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0407 13:24:58.271310 1095137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0407 13:24:58.391771 1095137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0407 13:24:58.394102 1095137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0407 13:24:58.455299 1095137 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0407 13:24:58.455333 1095137 retry.go:31] will retry after 468.170275ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0407 13:24:58.671682 1095137 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0407 13:24:58.671714 1095137 retry.go:31] will retry after 748.328959ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0407 13:24:58.692920 1095137 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0407 13:24:58.692953 1095137 retry.go:31] will retry after 1.446493979s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0407 13:24:58.924449 1095137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0407 13:24:59.071228 1095137 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0407 13:24:59.071270 1095137 retry.go:31] will retry after 1.395417222s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0407 13:24:59.233659 1095137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0407 13:24:59.371726 1095137 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0407 13:24:59.371764 1095137 retry.go:31] will retry after 679.891518ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0407 13:24:59.421077 1095137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0407 13:24:59.571821 1095137 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0407 13:24:59.571853 1095137 retry.go:31] will retry after 2.379875632s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0407 13:24:59.960761 1095137 node_ready.go:53] error getting node "old-k8s-version-856421": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-856421": dial tcp 192.168.76.2:8443: connect: connection refused
	I0407 13:25:00.052093 1095137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0407 13:25:00.139830 1095137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0407 13:25:00.304885 1095137 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0407 13:25:00.304935 1095137 retry.go:31] will retry after 2.255398456s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0407 13:25:00.356868 1095137 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0407 13:25:00.356902 1095137 retry.go:31] will retry after 2.777099262s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0407 13:25:00.467884 1095137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0407 13:25:00.613682 1095137 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0407 13:25:00.613742 1095137 retry.go:31] will retry after 1.437947407s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0407 13:25:01.952542 1095137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0407 13:25:02.052292 1095137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0407 13:25:02.053320 1095137 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0407 13:25:02.053353 1095137 retry.go:31] will retry after 1.988995677s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0407 13:25:02.152406 1095137 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0407 13:25:02.152443 1095137 retry.go:31] will retry after 3.212708422s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0407 13:25:02.460172 1095137 node_ready.go:53] error getting node "old-k8s-version-856421": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-856421": dial tcp 192.168.76.2:8443: connect: connection refused
	I0407 13:25:02.560458 1095137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0407 13:25:02.650796 1095137 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0407 13:25:02.650827 1095137 retry.go:31] will retry after 2.59528773s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0407 13:25:03.134512 1095137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0407 13:25:03.225880 1095137 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0407 13:25:03.225913 1095137 retry.go:31] will retry after 1.815071135s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0407 13:25:04.043222 1095137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0407 13:25:05.041939 1095137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0407 13:25:05.247084 1095137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0407 13:25:05.365836 1095137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0407 13:25:14.339410 1095137 node_ready.go:49] node "old-k8s-version-856421" has status "Ready":"True"
	I0407 13:25:14.339430 1095137 node_ready.go:38] duration metric: took 18.879889633s for node "old-k8s-version-856421" to be "Ready" ...
	I0407 13:25:14.339439 1095137 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0407 13:25:14.517005 1095137 pod_ready.go:79] waiting up to 6m0s for pod "coredns-74ff55c5b-gtrrb" in "kube-system" namespace to be "Ready" ...
	I0407 13:25:14.555802 1095137 pod_ready.go:93] pod "coredns-74ff55c5b-gtrrb" in "kube-system" namespace has status "Ready":"True"
	I0407 13:25:14.555884 1095137 pod_ready.go:82] duration metric: took 38.849882ms for pod "coredns-74ff55c5b-gtrrb" in "kube-system" namespace to be "Ready" ...
	I0407 13:25:14.555910 1095137 pod_ready.go:79] waiting up to 6m0s for pod "etcd-old-k8s-version-856421" in "kube-system" namespace to be "Ready" ...
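Once the node is Ready, the run waits for each system-critical pod (coredns above in ~39ms; etcd polled for some time below) to report the PodReady condition. A small companion sketch of that per-pod check, under the same assumptions as the node wait above:

    package nodewait

    import (
        corev1 "k8s.io/api/core/v1"
    )

    // podReady mirrors the pod_ready check: a pod counts as Ready only
    // when its PodReady condition is True.
    func podReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }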
	I0407 13:25:15.856467 1095137 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (11.813209189s)
	I0407 13:25:15.856562 1095137 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (10.814599286s)
	I0407 13:25:15.856580 1095137 addons.go:479] Verifying addon metrics-server=true in "old-k8s-version-856421"
	I0407 13:25:15.856613 1095137 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (10.609508359s)
	I0407 13:25:15.930003 1095137 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (10.564122108s)
	I0407 13:25:15.933293 1095137 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-856421 addons enable metrics-server
	
	I0407 13:25:15.936296 1095137 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass, dashboard
	I0407 13:25:15.939099 1095137 addons.go:514] duration metric: took 20.821488345s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass dashboard]
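The Run/Completed pairs above show the addon manifests being applied concurrently: all four `kubectl apply --force -f` commands are launched at 13:25:04-05 but only report completion at 13:25:15, each with its own ~10-12s duration metric. A minimal Go sketch of that launch-in-parallel-and-time pattern follows; it is an illustration, not minikube's actual ssh_runner implementation, and only the kubectl path and manifest locations are taken from the log:

	package main

	import (
		"fmt"
		"os/exec"
		"sync"
		"time"
	)

	func main() {
		// Manifest paths as they appear in the log; `sudo env` stands in
		// for the shell-style `sudo KUBECONFIG=... kubectl` invocation.
		manifests := []string{
			"/etc/kubernetes/addons/storage-provisioner.yaml",
			"/etc/kubernetes/addons/storageclass.yaml",
		}
		var wg sync.WaitGroup
		for _, m := range manifests {
			wg.Add(1)
			go func(m string) {
				defer wg.Done()
				start := time.Now()
				cmd := exec.Command("sudo", "env",
					"KUBECONFIG=/var/lib/minikube/kubeconfig",
					"/var/lib/minikube/binaries/v1.20.0/kubectl",
					"apply", "--force", "-f", m)
				if err := cmd.Run(); err != nil {
					fmt.Printf("apply %s: %v\n", m, err)
					return
				}
				// Matches the per-command duration metrics in the log.
				fmt.Printf("Completed %s in %s\n", m, time.Since(start))
			}(m)
		}
		wg.Wait()
	}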
	I0407 13:25:16.564190 1095137 pod_ready.go:103] pod "etcd-old-k8s-version-856421" in "kube-system" namespace has status "Ready":"False"
	I0407 13:25:19.061462 1095137 pod_ready.go:103] pod "etcd-old-k8s-version-856421" in "kube-system" namespace has status "Ready":"False"
	I0407 13:25:21.561159 1095137 pod_ready.go:103] pod "etcd-old-k8s-version-856421" in "kube-system" namespace has status "Ready":"False"
	I0407 13:25:24.136004 1095137 pod_ready.go:103] pod "etcd-old-k8s-version-856421" in "kube-system" namespace has status "Ready":"False"
	I0407 13:25:26.561246 1095137 pod_ready.go:103] pod "etcd-old-k8s-version-856421" in "kube-system" namespace has status "Ready":"False"
	I0407 13:25:28.562608 1095137 pod_ready.go:103] pod "etcd-old-k8s-version-856421" in "kube-system" namespace has status "Ready":"False"
	I0407 13:25:31.066030 1095137 pod_ready.go:103] pod "etcd-old-k8s-version-856421" in "kube-system" namespace has status "Ready":"False"
	I0407 13:25:33.562442 1095137 pod_ready.go:103] pod "etcd-old-k8s-version-856421" in "kube-system" namespace has status "Ready":"False"
	I0407 13:25:35.563149 1095137 pod_ready.go:103] pod "etcd-old-k8s-version-856421" in "kube-system" namespace has status "Ready":"False"
	I0407 13:25:38.062532 1095137 pod_ready.go:103] pod "etcd-old-k8s-version-856421" in "kube-system" namespace has status "Ready":"False"
	I0407 13:25:40.062712 1095137 pod_ready.go:103] pod "etcd-old-k8s-version-856421" in "kube-system" namespace has status "Ready":"False"
	I0407 13:25:42.561848 1095137 pod_ready.go:103] pod "etcd-old-k8s-version-856421" in "kube-system" namespace has status "Ready":"False"
	I0407 13:25:45.073485 1095137 pod_ready.go:103] pod "etcd-old-k8s-version-856421" in "kube-system" namespace has status "Ready":"False"
	I0407 13:25:47.561302 1095137 pod_ready.go:103] pod "etcd-old-k8s-version-856421" in "kube-system" namespace has status "Ready":"False"
	I0407 13:25:49.566750 1095137 pod_ready.go:103] pod "etcd-old-k8s-version-856421" in "kube-system" namespace has status "Ready":"False"
	I0407 13:25:52.062390 1095137 pod_ready.go:103] pod "etcd-old-k8s-version-856421" in "kube-system" namespace has status "Ready":"False"
	I0407 13:25:54.562235 1095137 pod_ready.go:103] pod "etcd-old-k8s-version-856421" in "kube-system" namespace has status "Ready":"False"
	I0407 13:25:57.061737 1095137 pod_ready.go:103] pod "etcd-old-k8s-version-856421" in "kube-system" namespace has status "Ready":"False"
	I0407 13:25:59.561403 1095137 pod_ready.go:103] pod "etcd-old-k8s-version-856421" in "kube-system" namespace has status "Ready":"False"
	I0407 13:26:01.562652 1095137 pod_ready.go:103] pod "etcd-old-k8s-version-856421" in "kube-system" namespace has status "Ready":"False"
	I0407 13:26:04.061829 1095137 pod_ready.go:103] pod "etcd-old-k8s-version-856421" in "kube-system" namespace has status "Ready":"False"
	I0407 13:26:06.062380 1095137 pod_ready.go:103] pod "etcd-old-k8s-version-856421" in "kube-system" namespace has status "Ready":"False"
	I0407 13:26:08.561652 1095137 pod_ready.go:103] pod "etcd-old-k8s-version-856421" in "kube-system" namespace has status "Ready":"False"
	I0407 13:26:10.567855 1095137 pod_ready.go:103] pod "etcd-old-k8s-version-856421" in "kube-system" namespace has status "Ready":"False"
	I0407 13:26:13.061107 1095137 pod_ready.go:103] pod "etcd-old-k8s-version-856421" in "kube-system" namespace has status "Ready":"False"
	I0407 13:26:15.062703 1095137 pod_ready.go:103] pod "etcd-old-k8s-version-856421" in "kube-system" namespace has status "Ready":"False"
	I0407 13:26:17.562161 1095137 pod_ready.go:103] pod "etcd-old-k8s-version-856421" in "kube-system" namespace has status "Ready":"False"
	I0407 13:26:20.062292 1095137 pod_ready.go:103] pod "etcd-old-k8s-version-856421" in "kube-system" namespace has status "Ready":"False"
	I0407 13:26:22.562573 1095137 pod_ready.go:103] pod "etcd-old-k8s-version-856421" in "kube-system" namespace has status "Ready":"False"
	I0407 13:26:24.600696 1095137 pod_ready.go:103] pod "etcd-old-k8s-version-856421" in "kube-system" namespace has status "Ready":"False"
	I0407 13:26:27.062921 1095137 pod_ready.go:93] pod "etcd-old-k8s-version-856421" in "kube-system" namespace has status "Ready":"True"
	I0407 13:26:27.062962 1095137 pod_ready.go:82] duration metric: took 1m12.507027795s for pod "etcd-old-k8s-version-856421" in "kube-system" namespace to be "Ready" ...
	I0407 13:26:27.062980 1095137 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-856421" in "kube-system" namespace to be "Ready" ...
	I0407 13:26:27.068010 1095137 pod_ready.go:93] pod "kube-apiserver-old-k8s-version-856421" in "kube-system" namespace has status "Ready":"True"
	I0407 13:26:27.068037 1095137 pod_ready.go:82] duration metric: took 5.04964ms for pod "kube-apiserver-old-k8s-version-856421" in "kube-system" namespace to be "Ready" ...
	I0407 13:26:27.068051 1095137 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-856421" in "kube-system" namespace to be "Ready" ...
	I0407 13:26:29.073533 1095137 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-856421" in "kube-system" namespace has status "Ready":"False"
	I0407 13:26:31.073993 1095137 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-856421" in "kube-system" namespace has status "Ready":"False"
	I0407 13:26:33.574731 1095137 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-856421" in "kube-system" namespace has status "Ready":"False"
	I0407 13:26:35.074653 1095137 pod_ready.go:93] pod "kube-controller-manager-old-k8s-version-856421" in "kube-system" namespace has status "Ready":"True"
	I0407 13:26:35.074681 1095137 pod_ready.go:82] duration metric: took 8.006621835s for pod "kube-controller-manager-old-k8s-version-856421" in "kube-system" namespace to be "Ready" ...
	I0407 13:26:35.074695 1095137 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-j5fsn" in "kube-system" namespace to be "Ready" ...
	I0407 13:26:35.081073 1095137 pod_ready.go:93] pod "kube-proxy-j5fsn" in "kube-system" namespace has status "Ready":"True"
	I0407 13:26:35.081099 1095137 pod_ready.go:82] duration metric: took 6.395638ms for pod "kube-proxy-j5fsn" in "kube-system" namespace to be "Ready" ...
	I0407 13:26:35.081112 1095137 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-856421" in "kube-system" namespace to be "Ready" ...
	I0407 13:26:35.086507 1095137 pod_ready.go:93] pod "kube-scheduler-old-k8s-version-856421" in "kube-system" namespace has status "Ready":"True"
	I0407 13:26:35.086539 1095137 pod_ready.go:82] duration metric: took 5.419271ms for pod "kube-scheduler-old-k8s-version-856421" in "kube-system" namespace to be "Ready" ...
	I0407 13:26:35.086551 1095137 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace to be "Ready" ...
	I0407 13:26:37.092752 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
	I0407 13:26:39.592070 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
	I0407 13:26:41.592146 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
	I0407 13:26:43.592330 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
	I0407 13:26:45.592393 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
	I0407 13:26:47.592493 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
	I0407 13:26:49.592720 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
	I0407 13:26:52.092514 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
	I0407 13:26:54.591788 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
	I0407 13:26:56.592197 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
	I0407 13:26:58.592770 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
	I0407 13:27:00.593086 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
	I0407 13:27:02.593281 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
	I0407 13:27:04.593492 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
	I0407 13:27:06.594455 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
	I0407 13:27:09.092036 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
	I0407 13:27:11.092074 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
	I0407 13:27:13.594376 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
	I0407 13:27:16.091807 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
	I0407 13:27:18.092934 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
	I0407 13:27:20.592561 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
	I0407 13:27:22.592787 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
	I0407 13:27:25.093408 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
	I0407 13:27:27.593290 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
	I0407 13:27:30.096676 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
	I0407 13:27:32.592208 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
	I0407 13:27:34.592875 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
	I0407 13:27:37.091999 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
	I0407 13:27:39.092356 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
	I0407 13:27:41.593206 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
	I0407 13:27:43.594362 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
	I0407 13:27:46.092634 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
	I0407 13:27:48.591768 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
	I0407 13:27:50.592248 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
	I0407 13:27:52.592293 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
	I0407 13:27:55.092777 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
	I0407 13:27:57.092834 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
	I0407 13:27:59.093062 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
	I0407 13:28:01.592797 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
	I0407 13:28:03.595050 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
	I0407 13:28:06.094410 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
	I0407 13:28:08.591958 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
	I0407 13:28:10.593138 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
	I0407 13:28:12.593370 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
	I0407 13:28:15.093337 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
	I0407 13:28:17.593583 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
	I0407 13:28:20.093257 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
	I0407 13:28:22.593250 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
	I0407 13:28:25.092741 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
	I0407 13:28:27.098752 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
	I0407 13:28:29.593024 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
	I0407 13:28:32.094121 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
	I0407 13:28:34.103261 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
	I0407 13:28:36.593128 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
	I0407 13:28:38.593462 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
	I0407 13:28:41.091749 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
	I0407 13:28:43.592360 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
	I0407 13:28:45.595526 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
	I0407 13:28:48.093132 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
	I0407 13:28:50.592196 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
	I0407 13:28:52.593320 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
	I0407 13:28:55.093516 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
	I0407 13:28:57.093911 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
	I0407 13:28:59.593224 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
	I0407 13:29:01.594270 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
	I0407 13:29:04.092242 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
	I0407 13:29:06.092347 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
	I0407 13:29:08.592336 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
	I0407 13:29:10.592427 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
	I0407 13:29:13.092295 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
	I0407 13:29:15.093464 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
	I0407 13:29:17.592666 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
	I0407 13:29:20.093234 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
	I0407 13:29:22.093400 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
	I0407 13:29:24.593422 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
	I0407 13:29:26.594636 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
	I0407 13:29:29.093768 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
	I0407 13:29:31.591467 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
	I0407 13:29:33.600997 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
	I0407 13:29:36.098155 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
	I0407 13:29:38.592286 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
	I0407 13:29:40.594048 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
	I0407 13:29:43.092617 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
	I0407 13:29:45.095519 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
	I0407 13:29:47.594708 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
	I0407 13:29:50.095158 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
	I0407 13:29:52.100030 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
	I0407 13:29:54.592366 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
	I0407 13:29:57.092221 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
	I0407 13:29:59.593015 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
	I0407 13:30:01.595873 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
	I0407 13:30:04.093251 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
	I0407 13:30:06.591971 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
	I0407 13:30:08.592776 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
	I0407 13:30:11.092451 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
	I0407 13:30:13.594829 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
	I0407 13:30:16.094581 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
	I0407 13:30:18.595415 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
	I0407 13:30:21.092452 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
	I0407 13:30:23.093188 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
	I0407 13:30:25.104796 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
	I0407 13:30:27.592620 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
	I0407 13:30:29.594193 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
	I0407 13:30:32.091750 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
	I0407 13:30:34.592357 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
	I0407 13:30:35.093206 1095137 pod_ready.go:82] duration metric: took 4m0.006638227s for pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace to be "Ready" ...
	E0407 13:30:35.093237 1095137 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0407 13:30:35.093259 1095137 pod_ready.go:39] duration metric: took 5m20.753795595s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
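The four-minute run of pod_ready retries above ends exactly at its budget: the metrics-server pod is re-checked every couple of seconds, never reports Ready, the per-pod 4m0s wait expires, and the surrounding extra wait is cut short with `context deadline exceeded`. A self-contained Go sketch of that poll-until-ready-or-deadline pattern, with a shortened demo timeout; the interval and condition function are assumptions for illustration, not minikube's pod_ready.go internals:

	package main

	import (
		"context"
		"fmt"
		"time"
	)

	// waitPodReady re-runs cond on a fixed interval until it returns true
	// or the context deadline expires, like the pod_ready retry lines above.
	func waitPodReady(ctx context.Context, interval time.Duration, cond func() bool) error {
		ticker := time.NewTicker(interval)
		defer ticker.Stop()
		for {
			if cond() {
				return nil
			}
			select {
			case <-ctx.Done():
				return fmt.Errorf("waitPodCondition: %w", ctx.Err())
			case <-ticker.C:
			}
		}
	}

	func main() {
		// The log's per-pod budget is 4m0s; 5s here so the demo finishes fast.
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()
		start := time.Now()
		err := waitPodReady(ctx, time.Second, func() bool {
			return false // a real check would inspect the pod's Ready condition
		})
		fmt.Printf("took %s: %v\n", time.Since(start), err)
	}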
	I0407 13:30:35.093276 1095137 api_server.go:52] waiting for apiserver process to appear ...
	I0407 13:30:35.093322 1095137 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0407 13:30:35.093383 1095137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 13:30:35.142863 1095137 cri.go:89] found id: "a1ef4f8376e9fc950587e9d4daa95427d19c48e4606813b7ce87102e58fd46af"
	I0407 13:30:35.142893 1095137 cri.go:89] found id: "d89f223f22b8606c12ec0e778fa21d2abe90349e214332d4a1fa32b45f514d9b"
	I0407 13:30:35.142899 1095137 cri.go:89] found id: ""
	I0407 13:30:35.142907 1095137 logs.go:282] 2 containers: [a1ef4f8376e9fc950587e9d4daa95427d19c48e4606813b7ce87102e58fd46af d89f223f22b8606c12ec0e778fa21d2abe90349e214332d4a1fa32b45f514d9b]
	I0407 13:30:35.143001 1095137 ssh_runner.go:195] Run: which crictl
	I0407 13:30:35.147050 1095137 ssh_runner.go:195] Run: which crictl
	I0407 13:30:35.150870 1095137 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0407 13:30:35.150942 1095137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 13:30:35.190454 1095137 cri.go:89] found id: "5864d99cdd47d055aee6b1526bb1c6a1ee5862850613e2d63e0d93e141c7f4d5"
	I0407 13:30:35.190476 1095137 cri.go:89] found id: "ad3658b16b26420a5c8d95f062de8e38e1c9d8d5235a77bf2a50e7228aabb735"
	I0407 13:30:35.190482 1095137 cri.go:89] found id: ""
	I0407 13:30:35.190489 1095137 logs.go:282] 2 containers: [5864d99cdd47d055aee6b1526bb1c6a1ee5862850613e2d63e0d93e141c7f4d5 ad3658b16b26420a5c8d95f062de8e38e1c9d8d5235a77bf2a50e7228aabb735]
	I0407 13:30:35.190556 1095137 ssh_runner.go:195] Run: which crictl
	I0407 13:30:35.194338 1095137 ssh_runner.go:195] Run: which crictl
	I0407 13:30:35.198054 1095137 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0407 13:30:35.198130 1095137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 13:30:35.243104 1095137 cri.go:89] found id: "051642364b305742215427b11bf59d630eb773d7d2d8489872ccb8bafb97a33a"
	I0407 13:30:35.243125 1095137 cri.go:89] found id: "e8606d211bc70838dc98631de3d337f0689089d54c4dd0963c66fed159c72bce"
	I0407 13:30:35.243130 1095137 cri.go:89] found id: ""
	I0407 13:30:35.243137 1095137 logs.go:282] 2 containers: [051642364b305742215427b11bf59d630eb773d7d2d8489872ccb8bafb97a33a e8606d211bc70838dc98631de3d337f0689089d54c4dd0963c66fed159c72bce]
	I0407 13:30:35.243196 1095137 ssh_runner.go:195] Run: which crictl
	I0407 13:30:35.246980 1095137 ssh_runner.go:195] Run: which crictl
	I0407 13:30:35.250601 1095137 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0407 13:30:35.250676 1095137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 13:30:35.290776 1095137 cri.go:89] found id: "d6760ad08162f8ec381c7feef6ca753faf89ef2bb6b6aaf4ad3420559e5f73e7"
	I0407 13:30:35.290800 1095137 cri.go:89] found id: "c6f6d481b0f4c36b1a380741fe1a1dc69435fb844001d3a69feaa1fddcdd4030"
	I0407 13:30:35.290805 1095137 cri.go:89] found id: ""
	I0407 13:30:35.290813 1095137 logs.go:282] 2 containers: [d6760ad08162f8ec381c7feef6ca753faf89ef2bb6b6aaf4ad3420559e5f73e7 c6f6d481b0f4c36b1a380741fe1a1dc69435fb844001d3a69feaa1fddcdd4030]
	I0407 13:30:35.290924 1095137 ssh_runner.go:195] Run: which crictl
	I0407 13:30:35.294717 1095137 ssh_runner.go:195] Run: which crictl
	I0407 13:30:35.298053 1095137 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0407 13:30:35.298125 1095137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 13:30:35.339141 1095137 cri.go:89] found id: "74af9023b4fde2bd349b21c976719ba85dda1ea03171badab844478b8cbf2088"
	I0407 13:30:35.339176 1095137 cri.go:89] found id: "77f2619b2d1aad99f71633fed74393b3b19f570432847d24bea319ce3f3cc54b"
	I0407 13:30:35.339182 1095137 cri.go:89] found id: ""
	I0407 13:30:35.339192 1095137 logs.go:282] 2 containers: [74af9023b4fde2bd349b21c976719ba85dda1ea03171badab844478b8cbf2088 77f2619b2d1aad99f71633fed74393b3b19f570432847d24bea319ce3f3cc54b]
	I0407 13:30:35.339260 1095137 ssh_runner.go:195] Run: which crictl
	I0407 13:30:35.343444 1095137 ssh_runner.go:195] Run: which crictl
	I0407 13:30:35.347381 1095137 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 13:30:35.347466 1095137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 13:30:35.386505 1095137 cri.go:89] found id: "04c3bb1dfe7c8d458ce5ec56757e932f8a599e0d467221dd4b85ef6a3fefa6df"
	I0407 13:30:35.386572 1095137 cri.go:89] found id: "2b349ae2ec417a9d000e61e8c66afdcccb2d202d992eec33826df20e410f87a2"
	I0407 13:30:35.386590 1095137 cri.go:89] found id: ""
	I0407 13:30:35.386605 1095137 logs.go:282] 2 containers: [04c3bb1dfe7c8d458ce5ec56757e932f8a599e0d467221dd4b85ef6a3fefa6df 2b349ae2ec417a9d000e61e8c66afdcccb2d202d992eec33826df20e410f87a2]
	I0407 13:30:35.386672 1095137 ssh_runner.go:195] Run: which crictl
	I0407 13:30:35.391142 1095137 ssh_runner.go:195] Run: which crictl
	I0407 13:30:35.395064 1095137 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0407 13:30:35.395142 1095137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 13:30:35.434125 1095137 cri.go:89] found id: "e4568189822d430c58ad9f0a391ab15cbb27395e1408248c2c03f19d5dc9150b"
	I0407 13:30:35.434150 1095137 cri.go:89] found id: "b2ada56c528ba0083dc91efb8ea35b5b75cc614fa96a6cd93819b45a72ee05fb"
	I0407 13:30:35.434156 1095137 cri.go:89] found id: ""
	I0407 13:30:35.434163 1095137 logs.go:282] 2 containers: [e4568189822d430c58ad9f0a391ab15cbb27395e1408248c2c03f19d5dc9150b b2ada56c528ba0083dc91efb8ea35b5b75cc614fa96a6cd93819b45a72ee05fb]
	I0407 13:30:35.434247 1095137 ssh_runner.go:195] Run: which crictl
	I0407 13:30:35.438141 1095137 ssh_runner.go:195] Run: which crictl
	I0407 13:30:35.441512 1095137 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0407 13:30:35.441726 1095137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0407 13:30:35.481889 1095137 cri.go:89] found id: "2c994f7c46244b75a5d9fce69b3c3f15c9a2ce0a534a675952a6506d4c18ba61"
	I0407 13:30:35.481955 1095137 cri.go:89] found id: "d05c978cfa5a3d8c7465b05f0087ddff9b59b6291d3e17156779dbb770748849"
	I0407 13:30:35.481974 1095137 cri.go:89] found id: ""
	I0407 13:30:35.481998 1095137 logs.go:282] 2 containers: [2c994f7c46244b75a5d9fce69b3c3f15c9a2ce0a534a675952a6506d4c18ba61 d05c978cfa5a3d8c7465b05f0087ddff9b59b6291d3e17156779dbb770748849]
	I0407 13:30:35.482078 1095137 ssh_runner.go:195] Run: which crictl
	I0407 13:30:35.485908 1095137 ssh_runner.go:195] Run: which crictl
	I0407 13:30:35.489672 1095137 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 13:30:35.489809 1095137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 13:30:35.530554 1095137 cri.go:89] found id: "3e668dbcb4dd3fef6e58de6d11f5522dbfad76947bfe7bab5fda3d0228b2f625"
	I0407 13:30:35.530621 1095137 cri.go:89] found id: ""
	I0407 13:30:35.530643 1095137 logs.go:282] 1 containers: [3e668dbcb4dd3fef6e58de6d11f5522dbfad76947bfe7bab5fda3d0228b2f625]
	I0407 13:30:35.530739 1095137 ssh_runner.go:195] Run: which crictl
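The block above is evidence-gathering setup: for each control-plane component, minikube resolves crictl via `which crictl` and enumerates every matching container, running or exited, with `sudo crictl ps -a --quiet --name=<component>`; the newline-separated IDs become the `found id:` entries (two per component here, one for kubernetes-dashboard), and the lines that follow pull `crictl logs --tail 400` for each ID. A hedged Go sketch of the same enumeration, shelling out the identical crictl command; the component names come from the log, everything else is illustrative:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listContainers returns the IDs of all CRI containers whose name
	// matches component, as in the cri.go listing lines above.
	func listContainers(component string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a",
			"--quiet", "--name="+component).Output()
		if err != nil {
			return nil, err
		}
		var ids []string
		for _, id := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			if id != "" {
				ids = append(ids, id)
			}
		}
		return ids, nil
	}

	func main() {
		for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
			ids, err := listContainers(c)
			if err != nil {
				fmt.Printf("%s: %v\n", c, err)
				continue
			}
			fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
		}
	}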
	I0407 13:30:35.534295 1095137 logs.go:123] Gathering logs for etcd [5864d99cdd47d055aee6b1526bb1c6a1ee5862850613e2d63e0d93e141c7f4d5] ...
	I0407 13:30:35.534319 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5864d99cdd47d055aee6b1526bb1c6a1ee5862850613e2d63e0d93e141c7f4d5"
	I0407 13:30:35.584074 1095137 logs.go:123] Gathering logs for kube-proxy [74af9023b4fde2bd349b21c976719ba85dda1ea03171badab844478b8cbf2088] ...
	I0407 13:30:35.584106 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 74af9023b4fde2bd349b21c976719ba85dda1ea03171badab844478b8cbf2088"
	I0407 13:30:35.624129 1095137 logs.go:123] Gathering logs for kindnet [b2ada56c528ba0083dc91efb8ea35b5b75cc614fa96a6cd93819b45a72ee05fb] ...
	I0407 13:30:35.624158 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b2ada56c528ba0083dc91efb8ea35b5b75cc614fa96a6cd93819b45a72ee05fb"
	I0407 13:30:35.665670 1095137 logs.go:123] Gathering logs for container status ...
	I0407 13:30:35.665751 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 13:30:35.721123 1095137 logs.go:123] Gathering logs for kube-apiserver [d89f223f22b8606c12ec0e778fa21d2abe90349e214332d4a1fa32b45f514d9b] ...
	I0407 13:30:35.721154 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d89f223f22b8606c12ec0e778fa21d2abe90349e214332d4a1fa32b45f514d9b"
	I0407 13:30:35.776096 1095137 logs.go:123] Gathering logs for kube-scheduler [d6760ad08162f8ec381c7feef6ca753faf89ef2bb6b6aaf4ad3420559e5f73e7] ...
	I0407 13:30:35.776130 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d6760ad08162f8ec381c7feef6ca753faf89ef2bb6b6aaf4ad3420559e5f73e7"
	I0407 13:30:35.819279 1095137 logs.go:123] Gathering logs for kube-scheduler [c6f6d481b0f4c36b1a380741fe1a1dc69435fb844001d3a69feaa1fddcdd4030] ...
	I0407 13:30:35.819309 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c6f6d481b0f4c36b1a380741fe1a1dc69435fb844001d3a69feaa1fddcdd4030"
	I0407 13:30:35.872048 1095137 logs.go:123] Gathering logs for kube-apiserver [a1ef4f8376e9fc950587e9d4daa95427d19c48e4606813b7ce87102e58fd46af] ...
	I0407 13:30:35.872080 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a1ef4f8376e9fc950587e9d4daa95427d19c48e4606813b7ce87102e58fd46af"
	I0407 13:30:35.957224 1095137 logs.go:123] Gathering logs for etcd [ad3658b16b26420a5c8d95f062de8e38e1c9d8d5235a77bf2a50e7228aabb735] ...
	I0407 13:30:35.957262 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ad3658b16b26420a5c8d95f062de8e38e1c9d8d5235a77bf2a50e7228aabb735"
	I0407 13:30:36.000405 1095137 logs.go:123] Gathering logs for coredns [051642364b305742215427b11bf59d630eb773d7d2d8489872ccb8bafb97a33a] ...
	I0407 13:30:36.000487 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 051642364b305742215427b11bf59d630eb773d7d2d8489872ccb8bafb97a33a"
	I0407 13:30:36.045995 1095137 logs.go:123] Gathering logs for coredns [e8606d211bc70838dc98631de3d337f0689089d54c4dd0963c66fed159c72bce] ...
	I0407 13:30:36.046027 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e8606d211bc70838dc98631de3d337f0689089d54c4dd0963c66fed159c72bce"
	I0407 13:30:36.101062 1095137 logs.go:123] Gathering logs for storage-provisioner [2c994f7c46244b75a5d9fce69b3c3f15c9a2ce0a534a675952a6506d4c18ba61] ...
	I0407 13:30:36.101096 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2c994f7c46244b75a5d9fce69b3c3f15c9a2ce0a534a675952a6506d4c18ba61"
	I0407 13:30:36.151700 1095137 logs.go:123] Gathering logs for storage-provisioner [d05c978cfa5a3d8c7465b05f0087ddff9b59b6291d3e17156779dbb770748849] ...
	I0407 13:30:36.151732 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d05c978cfa5a3d8c7465b05f0087ddff9b59b6291d3e17156779dbb770748849"
	I0407 13:30:36.197533 1095137 logs.go:123] Gathering logs for kubelet ...
	I0407 13:30:36.197582 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0407 13:30:36.257383 1095137 logs.go:138] Found kubelet problem: Apr 07 13:25:14 old-k8s-version-856421 kubelet[667]: E0407 13:25:14.307390     667 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-856421" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-856421' and this object
	W0407 13:30:36.257840 1095137 logs.go:138] Found kubelet problem: Apr 07 13:25:14 old-k8s-version-856421 kubelet[667]: E0407 13:25:14.307927     667 reflector.go:138] object-"kube-system"/"kube-proxy-token-j6crq": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-j6crq" is forbidden: User "system:node:old-k8s-version-856421" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-856421' and this object
	W0407 13:30:36.258057 1095137 logs.go:138] Found kubelet problem: Apr 07 13:25:14 old-k8s-version-856421 kubelet[667]: E0407 13:25:14.308116     667 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-856421" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-856421' and this object
	W0407 13:30:36.258285 1095137 logs.go:138] Found kubelet problem: Apr 07 13:25:14 old-k8s-version-856421 kubelet[667]: E0407 13:25:14.308292     667 reflector.go:138] object-"kube-system"/"storage-provisioner-token-nvxlj": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-nvxlj" is forbidden: User "system:node:old-k8s-version-856421" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-856421' and this object
	W0407 13:30:36.258499 1095137 logs.go:138] Found kubelet problem: Apr 07 13:25:14 old-k8s-version-856421 kubelet[667]: E0407 13:25:14.308445     667 reflector.go:138] object-"default"/"default-token-znh7g": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-znh7g" is forbidden: User "system:node:old-k8s-version-856421" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-856421' and this object
	W0407 13:30:36.258716 1095137 logs.go:138] Found kubelet problem: Apr 07 13:25:14 old-k8s-version-856421 kubelet[667]: E0407 13:25:14.308590     667 reflector.go:138] object-"kube-system"/"kindnet-token-fxnc5": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-fxnc5" is forbidden: User "system:node:old-k8s-version-856421" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-856421' and this object
	W0407 13:30:36.258927 1095137 logs.go:138] Found kubelet problem: Apr 07 13:25:14 old-k8s-version-856421 kubelet[667]: E0407 13:25:14.308753     667 reflector.go:138] object-"kube-system"/"coredns-token-sjxkg": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-sjxkg" is forbidden: User "system:node:old-k8s-version-856421" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-856421' and this object
	W0407 13:30:36.265181 1095137 logs.go:138] Found kubelet problem: Apr 07 13:25:15 old-k8s-version-856421 kubelet[667]: E0407 13:25:15.094522     667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0407 13:30:36.269525 1095137 logs.go:138] Found kubelet problem: Apr 07 13:25:16 old-k8s-version-856421 kubelet[667]: E0407 13:25:16.056738     667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:30:36.273083 1095137 logs.go:138] Found kubelet problem: Apr 07 13:25:27 old-k8s-version-856421 kubelet[667]: E0407 13:25:27.890508     667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0407 13:30:36.274796 1095137 logs.go:138] Found kubelet problem: Apr 07 13:25:40 old-k8s-version-856421 kubelet[667]: E0407 13:25:40.901804     667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:30:36.275384 1095137 logs.go:138] Found kubelet problem: Apr 07 13:25:42 old-k8s-version-856421 kubelet[667]: E0407 13:25:42.190528     667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
	W0407 13:30:36.276045 1095137 logs.go:138] Found kubelet problem: Apr 07 13:25:43 old-k8s-version-856421 kubelet[667]: E0407 13:25:43.194128     667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
	W0407 13:30:36.276493 1095137 logs.go:138] Found kubelet problem: Apr 07 13:25:47 old-k8s-version-856421 kubelet[667]: E0407 13:25:47.208836     667 pod_workers.go:191] Error syncing pod ffa09209-8141-4692-8b43-e212485a4adb ("storage-provisioner_kube-system(ffa09209-8141-4692-8b43-e212485a4adb)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(ffa09209-8141-4692-8b43-e212485a4adb)"
	W0407 13:30:36.276818 1095137 logs.go:138] Found kubelet problem: Apr 07 13:25:49 old-k8s-version-856421 kubelet[667]: E0407 13:25:49.601173     667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
	W0407 13:30:36.279591 1095137 logs.go:138] Found kubelet problem: Apr 07 13:25:55 old-k8s-version-856421 kubelet[667]: E0407 13:25:55.894550     667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0407 13:30:36.280308 1095137 logs.go:138] Found kubelet problem: Apr 07 13:26:02 old-k8s-version-856421 kubelet[667]: E0407 13:26:02.259589     667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
	W0407 13:30:36.280492 1095137 logs.go:138] Found kubelet problem: Apr 07 13:26:07 old-k8s-version-856421 kubelet[667]: E0407 13:26:07.882067     667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:30:36.280816 1095137 logs.go:138] Found kubelet problem: Apr 07 13:26:09 old-k8s-version-856421 kubelet[667]: E0407 13:26:09.601119     667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
	W0407 13:30:36.280999 1095137 logs.go:138] Found kubelet problem: Apr 07 13:26:21 old-k8s-version-856421 kubelet[667]: E0407 13:26:21.882035     667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:30:36.281582 1095137 logs.go:138] Found kubelet problem: Apr 07 13:26:23 old-k8s-version-856421 kubelet[667]: E0407 13:26:23.333979     667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
	W0407 13:30:36.281911 1095137 logs.go:138] Found kubelet problem: Apr 07 13:26:29 old-k8s-version-856421 kubelet[667]: E0407 13:26:29.601138     667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
	W0407 13:30:36.282096 1095137 logs.go:138] Found kubelet problem: Apr 07 13:26:33 old-k8s-version-856421 kubelet[667]: E0407 13:26:33.882060     667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:30:36.282424 1095137 logs.go:138] Found kubelet problem: Apr 07 13:26:42 old-k8s-version-856421 kubelet[667]: E0407 13:26:42.882285     667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
	W0407 13:30:36.284867 1095137 logs.go:138] Found kubelet problem: Apr 07 13:26:46 old-k8s-version-856421 kubelet[667]: E0407 13:26:46.916880     667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0407 13:30:36.285192 1095137 logs.go:138] Found kubelet problem: Apr 07 13:26:53 old-k8s-version-856421 kubelet[667]: E0407 13:26:53.881641     667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
	W0407 13:30:36.285375 1095137 logs.go:138] Found kubelet problem: Apr 07 13:26:58 old-k8s-version-856421 kubelet[667]: E0407 13:26:58.887165     667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:30:36.285969 1095137 logs.go:138] Found kubelet problem: Apr 07 13:27:05 old-k8s-version-856421 kubelet[667]: E0407 13:27:05.451459     667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
	W0407 13:30:36.286294 1095137 logs.go:138] Found kubelet problem: Apr 07 13:27:09 old-k8s-version-856421 kubelet[667]: E0407 13:27:09.601083     667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
	W0407 13:30:36.286481 1095137 logs.go:138] Found kubelet problem: Apr 07 13:27:11 old-k8s-version-856421 kubelet[667]: E0407 13:27:11.882020     667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:30:36.286805 1095137 logs.go:138] Found kubelet problem: Apr 07 13:27:22 old-k8s-version-856421 kubelet[667]: E0407 13:27:22.882870     667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
	W0407 13:30:36.286988 1095137 logs.go:138] Found kubelet problem: Apr 07 13:27:24 old-k8s-version-856421 kubelet[667]: E0407 13:27:24.883611     667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:30:36.287316 1095137 logs.go:138] Found kubelet problem: Apr 07 13:27:33 old-k8s-version-856421 kubelet[667]: E0407 13:27:33.881645     667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
	W0407 13:30:36.287503 1095137 logs.go:138] Found kubelet problem: Apr 07 13:27:36 old-k8s-version-856421 kubelet[667]: E0407 13:27:36.883495     667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:30:36.287827 1095137 logs.go:138] Found kubelet problem: Apr 07 13:27:46 old-k8s-version-856421 kubelet[667]: E0407 13:27:46.882237     667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
	W0407 13:30:36.288011 1095137 logs.go:138] Found kubelet problem: Apr 07 13:27:49 old-k8s-version-856421 kubelet[667]: E0407 13:27:49.882049     667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:30:36.288193 1095137 logs.go:138] Found kubelet problem: Apr 07 13:28:00 old-k8s-version-856421 kubelet[667]: E0407 13:28:00.882106     667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:30:36.288518 1095137 logs.go:138] Found kubelet problem: Apr 07 13:28:01 old-k8s-version-856421 kubelet[667]: E0407 13:28:01.881859     667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
	W0407 13:30:36.288842 1095137 logs.go:138] Found kubelet problem: Apr 07 13:28:13 old-k8s-version-856421 kubelet[667]: E0407 13:28:13.882356     667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
	W0407 13:30:36.291435 1095137 logs.go:138] Found kubelet problem: Apr 07 13:28:15 old-k8s-version-856421 kubelet[667]: E0407 13:28:15.895177     667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0407 13:30:36.291774 1095137 logs.go:138] Found kubelet problem: Apr 07 13:28:24 old-k8s-version-856421 kubelet[667]: E0407 13:28:24.882233     667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
	W0407 13:30:36.291963 1095137 logs.go:138] Found kubelet problem: Apr 07 13:28:27 old-k8s-version-856421 kubelet[667]: E0407 13:28:27.882283     667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:30:36.292546 1095137 logs.go:138] Found kubelet problem: Apr 07 13:28:36 old-k8s-version-856421 kubelet[667]: E0407 13:28:36.680281     667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
	W0407 13:30:36.292729 1095137 logs.go:138] Found kubelet problem: Apr 07 13:28:38 old-k8s-version-856421 kubelet[667]: E0407 13:28:38.882208     667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:30:36.293054 1095137 logs.go:138] Found kubelet problem: Apr 07 13:28:39 old-k8s-version-856421 kubelet[667]: E0407 13:28:39.601171     667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
	W0407 13:30:36.293238 1095137 logs.go:138] Found kubelet problem: Apr 07 13:28:50 old-k8s-version-856421 kubelet[667]: E0407 13:28:50.882465     667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:30:36.293561 1095137 logs.go:138] Found kubelet problem: Apr 07 13:28:52 old-k8s-version-856421 kubelet[667]: E0407 13:28:52.882220     667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
	W0407 13:30:36.293753 1095137 logs.go:138] Found kubelet problem: Apr 07 13:29:01 old-k8s-version-856421 kubelet[667]: E0407 13:29:01.882101     667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:30:36.294080 1095137 logs.go:138] Found kubelet problem: Apr 07 13:29:04 old-k8s-version-856421 kubelet[667]: E0407 13:29:04.885771     667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
	W0407 13:30:36.294263 1095137 logs.go:138] Found kubelet problem: Apr 07 13:29:15 old-k8s-version-856421 kubelet[667]: E0407 13:29:15.882105     667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:30:36.294592 1095137 logs.go:138] Found kubelet problem: Apr 07 13:29:19 old-k8s-version-856421 kubelet[667]: E0407 13:29:19.881643     667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
	W0407 13:30:36.294775 1095137 logs.go:138] Found kubelet problem: Apr 07 13:29:28 old-k8s-version-856421 kubelet[667]: E0407 13:29:28.884253     667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:30:36.295099 1095137 logs.go:138] Found kubelet problem: Apr 07 13:29:32 old-k8s-version-856421 kubelet[667]: E0407 13:29:32.882068     667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
	W0407 13:30:36.295282 1095137 logs.go:138] Found kubelet problem: Apr 07 13:29:39 old-k8s-version-856421 kubelet[667]: E0407 13:29:39.883031     667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:30:36.295613 1095137 logs.go:138] Found kubelet problem: Apr 07 13:29:47 old-k8s-version-856421 kubelet[667]: E0407 13:29:47.882527     667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
	W0407 13:30:36.295795 1095137 logs.go:138] Found kubelet problem: Apr 07 13:29:50 old-k8s-version-856421 kubelet[667]: E0407 13:29:50.882583     667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:30:36.296119 1095137 logs.go:138] Found kubelet problem: Apr 07 13:29:59 old-k8s-version-856421 kubelet[667]: E0407 13:29:59.882436     667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
	W0407 13:30:36.296304 1095137 logs.go:138] Found kubelet problem: Apr 07 13:30:01 old-k8s-version-856421 kubelet[667]: E0407 13:30:01.885267     667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:30:36.296653 1095137 logs.go:138] Found kubelet problem: Apr 07 13:30:10 old-k8s-version-856421 kubelet[667]: E0407 13:30:10.882871     667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
	W0407 13:30:36.296836 1095137 logs.go:138] Found kubelet problem: Apr 07 13:30:15 old-k8s-version-856421 kubelet[667]: E0407 13:30:15.882153     667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:30:36.297171 1095137 logs.go:138] Found kubelet problem: Apr 07 13:30:22 old-k8s-version-856421 kubelet[667]: E0407 13:30:22.882311     667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
	W0407 13:30:36.297353 1095137 logs.go:138] Found kubelet problem: Apr 07 13:30:28 old-k8s-version-856421 kubelet[667]: E0407 13:30:28.882142     667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:30:36.297778 1095137 logs.go:138] Found kubelet problem: Apr 07 13:30:35 old-k8s-version-856421 kubelet[667]: E0407 13:30:35.885355     667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
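The escalating "back-off" values in the warnings above (10s, 20s, 40s, 1m20s, 2m40s) follow kubelet's CrashLoopBackOff policy: the restart delay doubles after each failed start and is capped at five minutes. A minimal shell sketch of that schedule, for illustration only:

    # CrashLoopBackOff delay: starts at 10s, doubles per restart, capped at 5m (300s)
    d=10
    while :; do
      echo "${d}s"
      [ "$d" -ge 300 ] && break
      d=$((d * 2)); [ "$d" -gt 300 ] && d=300
    done

dashboard-metrics-scraper has therefore been failing long enough to reach the 2m40s step, one doubling short of the cap.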
	I0407 13:30:36.297794 1095137 logs.go:123] Gathering logs for describe nodes ...
	I0407 13:30:36.297813 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0407 13:30:36.492001 1095137 logs.go:123] Gathering logs for kube-proxy [77f2619b2d1aad99f71633fed74393b3b19f570432847d24bea319ce3f3cc54b] ...
	I0407 13:30:36.492107 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 77f2619b2d1aad99f71633fed74393b3b19f570432847d24bea319ce3f3cc54b"
	I0407 13:30:36.541166 1095137 logs.go:123] Gathering logs for kube-controller-manager [04c3bb1dfe7c8d458ce5ec56757e932f8a599e0d467221dd4b85ef6a3fefa6df] ...
	I0407 13:30:36.541195 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 04c3bb1dfe7c8d458ce5ec56757e932f8a599e0d467221dd4b85ef6a3fefa6df"
	I0407 13:30:36.602522 1095137 logs.go:123] Gathering logs for kube-controller-manager [2b349ae2ec417a9d000e61e8c66afdcccb2d202d992eec33826df20e410f87a2] ...
	I0407 13:30:36.602560 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b349ae2ec417a9d000e61e8c66afdcccb2d202d992eec33826df20e410f87a2"
	I0407 13:30:36.668156 1095137 logs.go:123] Gathering logs for kindnet [e4568189822d430c58ad9f0a391ab15cbb27395e1408248c2c03f19d5dc9150b] ...
	I0407 13:30:36.668194 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e4568189822d430c58ad9f0a391ab15cbb27395e1408248c2c03f19d5dc9150b"
	I0407 13:30:36.712474 1095137 logs.go:123] Gathering logs for kubernetes-dashboard [3e668dbcb4dd3fef6e58de6d11f5522dbfad76947bfe7bab5fda3d0228b2f625] ...
	I0407 13:30:36.712504 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e668dbcb4dd3fef6e58de6d11f5522dbfad76947bfe7bab5fda3d0228b2f625"
	I0407 13:30:36.751343 1095137 logs.go:123] Gathering logs for containerd ...
	I0407 13:30:36.751370 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0407 13:30:36.817010 1095137 logs.go:123] Gathering logs for dmesg ...
	I0407 13:30:36.817095 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 13:30:36.841811 1095137 out.go:358] Setting ErrFile to fd 2...
	I0407 13:30:36.841838 1095137 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0407 13:30:36.841885 1095137 out.go:270] X Problems detected in kubelet:
	W0407 13:30:36.841897 1095137 out.go:270]   Apr 07 13:30:10 old-k8s-version-856421 kubelet[667]: E0407 13:30:10.882871     667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
	W0407 13:30:36.841903 1095137 out.go:270]   Apr 07 13:30:15 old-k8s-version-856421 kubelet[667]: E0407 13:30:15.882153     667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:30:36.841918 1095137 out.go:270]   Apr 07 13:30:22 old-k8s-version-856421 kubelet[667]: E0407 13:30:22.882311     667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
	W0407 13:30:36.841924 1095137 out.go:270]   Apr 07 13:30:28 old-k8s-version-856421 kubelet[667]: E0407 13:30:28.882142     667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:30:36.841939 1095137 out.go:270]   Apr 07 13:30:35 old-k8s-version-856421 kubelet[667]: E0407 13:30:35.885355     667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
	I0407 13:30:36.841946 1095137 out.go:358] Setting ErrFile to fd 2...
	I0407 13:30:36.841952 1095137 out.go:392] TERM=,COLORTERM=, which probably does not support color
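Two problems recur throughout this capture: metrics-server is stuck in ImagePullBackOff because its image is pinned to fake.domain/registry.k8s.io/echoserver:1.4, which the node cannot resolve (the "no such host" errors above), and dashboard-metrics-scraper is in CrashLoopBackOff. A sketch of how to confirm both pod states from outside the node; it assumes minikube registered the profile name as a kubectl context, as it does by default, and reuses pod names taken from the log above:

    kubectl --context old-k8s-version-856421 -n kube-system get pods
    kubectl --context old-k8s-version-856421 -n kubernetes-dashboard get pods
    # Event history shows the repeated pull failures against fake.domain:
    kubectl --context old-k8s-version-856421 -n kube-system \
        describe pod metrics-server-9975d5f86-tkvrz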
	I0407 13:30:46.842901 1095137 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:30:46.870854 1095137 api_server.go:72] duration metric: took 5m51.753710743s to wait for apiserver process to appear ...
	I0407 13:30:46.870880 1095137 api_server.go:88] waiting for apiserver healthz status ...
	I0407 13:30:46.870915 1095137 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0407 13:30:46.870969 1095137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 13:30:46.986233 1095137 cri.go:89] found id: "a1ef4f8376e9fc950587e9d4daa95427d19c48e4606813b7ce87102e58fd46af"
	I0407 13:30:46.986252 1095137 cri.go:89] found id: "d89f223f22b8606c12ec0e778fa21d2abe90349e214332d4a1fa32b45f514d9b"
	I0407 13:30:46.986257 1095137 cri.go:89] found id: ""
	I0407 13:30:46.986264 1095137 logs.go:282] 2 containers: [a1ef4f8376e9fc950587e9d4daa95427d19c48e4606813b7ce87102e58fd46af d89f223f22b8606c12ec0e778fa21d2abe90349e214332d4a1fa32b45f514d9b]
	I0407 13:30:46.986340 1095137 ssh_runner.go:195] Run: which crictl
	I0407 13:30:46.990308 1095137 ssh_runner.go:195] Run: which crictl
	I0407 13:30:46.993838 1095137 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0407 13:30:46.993911 1095137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 13:30:47.052280 1095137 cri.go:89] found id: "5864d99cdd47d055aee6b1526bb1c6a1ee5862850613e2d63e0d93e141c7f4d5"
	I0407 13:30:47.052300 1095137 cri.go:89] found id: "ad3658b16b26420a5c8d95f062de8e38e1c9d8d5235a77bf2a50e7228aabb735"
	I0407 13:30:47.052305 1095137 cri.go:89] found id: ""
	I0407 13:30:47.052313 1095137 logs.go:282] 2 containers: [5864d99cdd47d055aee6b1526bb1c6a1ee5862850613e2d63e0d93e141c7f4d5 ad3658b16b26420a5c8d95f062de8e38e1c9d8d5235a77bf2a50e7228aabb735]
	I0407 13:30:47.052369 1095137 ssh_runner.go:195] Run: which crictl
	I0407 13:30:47.056223 1095137 ssh_runner.go:195] Run: which crictl
	I0407 13:30:47.059720 1095137 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0407 13:30:47.059794 1095137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 13:30:47.130170 1095137 cri.go:89] found id: "051642364b305742215427b11bf59d630eb773d7d2d8489872ccb8bafb97a33a"
	I0407 13:30:47.130191 1095137 cri.go:89] found id: "e8606d211bc70838dc98631de3d337f0689089d54c4dd0963c66fed159c72bce"
	I0407 13:30:47.130196 1095137 cri.go:89] found id: ""
	I0407 13:30:47.130204 1095137 logs.go:282] 2 containers: [051642364b305742215427b11bf59d630eb773d7d2d8489872ccb8bafb97a33a e8606d211bc70838dc98631de3d337f0689089d54c4dd0963c66fed159c72bce]
	I0407 13:30:47.130261 1095137 ssh_runner.go:195] Run: which crictl
	I0407 13:30:47.134245 1095137 ssh_runner.go:195] Run: which crictl
	I0407 13:30:47.143189 1095137 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0407 13:30:47.143271 1095137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 13:30:47.202603 1095137 cri.go:89] found id: "d6760ad08162f8ec381c7feef6ca753faf89ef2bb6b6aaf4ad3420559e5f73e7"
	I0407 13:30:47.202625 1095137 cri.go:89] found id: "c6f6d481b0f4c36b1a380741fe1a1dc69435fb844001d3a69feaa1fddcdd4030"
	I0407 13:30:47.202630 1095137 cri.go:89] found id: ""
	I0407 13:30:47.202637 1095137 logs.go:282] 2 containers: [d6760ad08162f8ec381c7feef6ca753faf89ef2bb6b6aaf4ad3420559e5f73e7 c6f6d481b0f4c36b1a380741fe1a1dc69435fb844001d3a69feaa1fddcdd4030]
	I0407 13:30:47.202699 1095137 ssh_runner.go:195] Run: which crictl
	I0407 13:30:47.206762 1095137 ssh_runner.go:195] Run: which crictl
	I0407 13:30:47.210646 1095137 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0407 13:30:47.210745 1095137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 13:30:47.284058 1095137 cri.go:89] found id: "74af9023b4fde2bd349b21c976719ba85dda1ea03171badab844478b8cbf2088"
	I0407 13:30:47.284131 1095137 cri.go:89] found id: "77f2619b2d1aad99f71633fed74393b3b19f570432847d24bea319ce3f3cc54b"
	I0407 13:30:47.284150 1095137 cri.go:89] found id: ""
	I0407 13:30:47.284173 1095137 logs.go:282] 2 containers: [74af9023b4fde2bd349b21c976719ba85dda1ea03171badab844478b8cbf2088 77f2619b2d1aad99f71633fed74393b3b19f570432847d24bea319ce3f3cc54b]
	I0407 13:30:47.284264 1095137 ssh_runner.go:195] Run: which crictl
	I0407 13:30:47.290441 1095137 ssh_runner.go:195] Run: which crictl
	I0407 13:30:47.294067 1095137 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 13:30:47.294179 1095137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 13:30:47.342560 1095137 cri.go:89] found id: "04c3bb1dfe7c8d458ce5ec56757e932f8a599e0d467221dd4b85ef6a3fefa6df"
	I0407 13:30:47.342628 1095137 cri.go:89] found id: "2b349ae2ec417a9d000e61e8c66afdcccb2d202d992eec33826df20e410f87a2"
	I0407 13:30:47.342646 1095137 cri.go:89] found id: ""
	I0407 13:30:47.342669 1095137 logs.go:282] 2 containers: [04c3bb1dfe7c8d458ce5ec56757e932f8a599e0d467221dd4b85ef6a3fefa6df 2b349ae2ec417a9d000e61e8c66afdcccb2d202d992eec33826df20e410f87a2]
	I0407 13:30:47.342765 1095137 ssh_runner.go:195] Run: which crictl
	I0407 13:30:47.346752 1095137 ssh_runner.go:195] Run: which crictl
	I0407 13:30:47.351671 1095137 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0407 13:30:47.351794 1095137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 13:30:47.412231 1095137 cri.go:89] found id: "e4568189822d430c58ad9f0a391ab15cbb27395e1408248c2c03f19d5dc9150b"
	I0407 13:30:47.412307 1095137 cri.go:89] found id: "b2ada56c528ba0083dc91efb8ea35b5b75cc614fa96a6cd93819b45a72ee05fb"
	I0407 13:30:47.412328 1095137 cri.go:89] found id: ""
	I0407 13:30:47.412350 1095137 logs.go:282] 2 containers: [e4568189822d430c58ad9f0a391ab15cbb27395e1408248c2c03f19d5dc9150b b2ada56c528ba0083dc91efb8ea35b5b75cc614fa96a6cd93819b45a72ee05fb]
	I0407 13:30:47.412437 1095137 ssh_runner.go:195] Run: which crictl
	I0407 13:30:47.416534 1095137 ssh_runner.go:195] Run: which crictl
	I0407 13:30:47.420684 1095137 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 13:30:47.420804 1095137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 13:30:47.473376 1095137 cri.go:89] found id: "3e668dbcb4dd3fef6e58de6d11f5522dbfad76947bfe7bab5fda3d0228b2f625"
	I0407 13:30:47.473454 1095137 cri.go:89] found id: ""
	I0407 13:30:47.473475 1095137 logs.go:282] 1 containers: [3e668dbcb4dd3fef6e58de6d11f5522dbfad76947bfe7bab5fda3d0228b2f625]
	I0407 13:30:47.473560 1095137 ssh_runner.go:195] Run: which crictl
	I0407 13:30:47.477965 1095137 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0407 13:30:47.478087 1095137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0407 13:30:47.526054 1095137 cri.go:89] found id: "2c994f7c46244b75a5d9fce69b3c3f15c9a2ce0a534a675952a6506d4c18ba61"
	I0407 13:30:47.526129 1095137 cri.go:89] found id: "d05c978cfa5a3d8c7465b05f0087ddff9b59b6291d3e17156779dbb770748849"
	I0407 13:30:47.526148 1095137 cri.go:89] found id: ""
	I0407 13:30:47.526170 1095137 logs.go:282] 2 containers: [2c994f7c46244b75a5d9fce69b3c3f15c9a2ce0a534a675952a6506d4c18ba61 d05c978cfa5a3d8c7465b05f0087ddff9b59b6291d3e17156779dbb770748849]
	I0407 13:30:47.526254 1095137 ssh_runner.go:195] Run: which crictl
	I0407 13:30:47.531086 1095137 ssh_runner.go:195] Run: which crictl
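Each listing above returns two container IDs per component because the restarted cluster keeps both the pre-restart and post-restart instances in containerd; only kubernetes-dashboard shows a single ID. The same discovery can be repeated by hand over SSH (a sketch, reusing the crictl invocations shown in the log; replace <container-id> with an ID from the listing):

    minikube ssh -p old-k8s-version-856421 "sudo crictl ps -a --quiet --name=kube-apiserver"
    minikube ssh -p old-k8s-version-856421 "sudo crictl logs --tail 400 <container-id>"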
	I0407 13:30:47.534990 1095137 logs.go:123] Gathering logs for describe nodes ...
	I0407 13:30:47.535062 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0407 13:30:47.736664 1095137 logs.go:123] Gathering logs for coredns [e8606d211bc70838dc98631de3d337f0689089d54c4dd0963c66fed159c72bce] ...
	I0407 13:30:47.736704 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e8606d211bc70838dc98631de3d337f0689089d54c4dd0963c66fed159c72bce"
	I0407 13:30:47.789228 1095137 logs.go:123] Gathering logs for kubernetes-dashboard [3e668dbcb4dd3fef6e58de6d11f5522dbfad76947bfe7bab5fda3d0228b2f625] ...
	I0407 13:30:47.789262 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e668dbcb4dd3fef6e58de6d11f5522dbfad76947bfe7bab5fda3d0228b2f625"
	I0407 13:30:47.866453 1095137 logs.go:123] Gathering logs for storage-provisioner [d05c978cfa5a3d8c7465b05f0087ddff9b59b6291d3e17156779dbb770748849] ...
	I0407 13:30:47.866486 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d05c978cfa5a3d8c7465b05f0087ddff9b59b6291d3e17156779dbb770748849"
	I0407 13:30:47.912587 1095137 logs.go:123] Gathering logs for container status ...
	I0407 13:30:47.912618 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 13:30:47.991125 1095137 logs.go:123] Gathering logs for kubelet ...
	I0407 13:30:47.991154 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
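The "Found kubelet problem" warnings that follow are matches scanned from the journal slice pulled here. To read the same window directly, a sketch assuming the profile from this run:

    # Same 400-line kubelet journal tail the log-gatherer scans
    minikube ssh -p old-k8s-version-856421 "sudo journalctl -u kubelet -n 400 --no-pager"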
	W0407 13:30:48.065207 1095137 logs.go:138] Found kubelet problem: Apr 07 13:25:14 old-k8s-version-856421 kubelet[667]: E0407 13:25:14.307390     667 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-856421" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-856421' and this object
	W0407 13:30:48.065569 1095137 logs.go:138] Found kubelet problem: Apr 07 13:25:14 old-k8s-version-856421 kubelet[667]: E0407 13:25:14.307927     667 reflector.go:138] object-"kube-system"/"kube-proxy-token-j6crq": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-j6crq" is forbidden: User "system:node:old-k8s-version-856421" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-856421' and this object
	W0407 13:30:48.065836 1095137 logs.go:138] Found kubelet problem: Apr 07 13:25:14 old-k8s-version-856421 kubelet[667]: E0407 13:25:14.308116     667 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-856421" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-856421' and this object
	W0407 13:30:48.066068 1095137 logs.go:138] Found kubelet problem: Apr 07 13:25:14 old-k8s-version-856421 kubelet[667]: E0407 13:25:14.308292     667 reflector.go:138] object-"kube-system"/"storage-provisioner-token-nvxlj": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-nvxlj" is forbidden: User "system:node:old-k8s-version-856421" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-856421' and this object
	W0407 13:30:48.066279 1095137 logs.go:138] Found kubelet problem: Apr 07 13:25:14 old-k8s-version-856421 kubelet[667]: E0407 13:25:14.308445     667 reflector.go:138] object-"default"/"default-token-znh7g": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-znh7g" is forbidden: User "system:node:old-k8s-version-856421" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-856421' and this object
	W0407 13:30:48.066499 1095137 logs.go:138] Found kubelet problem: Apr 07 13:25:14 old-k8s-version-856421 kubelet[667]: E0407 13:25:14.308590     667 reflector.go:138] object-"kube-system"/"kindnet-token-fxnc5": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-fxnc5" is forbidden: User "system:node:old-k8s-version-856421" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-856421' and this object
	W0407 13:30:48.066775 1095137 logs.go:138] Found kubelet problem: Apr 07 13:25:14 old-k8s-version-856421 kubelet[667]: E0407 13:25:14.308753     667 reflector.go:138] object-"kube-system"/"coredns-token-sjxkg": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-sjxkg" is forbidden: User "system:node:old-k8s-version-856421" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-856421' and this object
	W0407 13:30:48.072894 1095137 logs.go:138] Found kubelet problem: Apr 07 13:25:15 old-k8s-version-856421 kubelet[667]: E0407 13:25:15.094522     667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0407 13:30:48.078579 1095137 logs.go:138] Found kubelet problem: Apr 07 13:25:16 old-k8s-version-856421 kubelet[667]: E0407 13:25:16.056738     667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:30:48.082673 1095137 logs.go:138] Found kubelet problem: Apr 07 13:25:27 old-k8s-version-856421 kubelet[667]: E0407 13:25:27.890508     667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0407 13:30:48.084370 1095137 logs.go:138] Found kubelet problem: Apr 07 13:25:40 old-k8s-version-856421 kubelet[667]: E0407 13:25:40.901804     667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:30:48.084966 1095137 logs.go:138] Found kubelet problem: Apr 07 13:25:42 old-k8s-version-856421 kubelet[667]: E0407 13:25:42.190528     667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
	W0407 13:30:48.085629 1095137 logs.go:138] Found kubelet problem: Apr 07 13:25:43 old-k8s-version-856421 kubelet[667]: E0407 13:25:43.194128     667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
	W0407 13:30:48.086140 1095137 logs.go:138] Found kubelet problem: Apr 07 13:25:47 old-k8s-version-856421 kubelet[667]: E0407 13:25:47.208836     667 pod_workers.go:191] Error syncing pod ffa09209-8141-4692-8b43-e212485a4adb ("storage-provisioner_kube-system(ffa09209-8141-4692-8b43-e212485a4adb)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(ffa09209-8141-4692-8b43-e212485a4adb)"
	W0407 13:30:48.086480 1095137 logs.go:138] Found kubelet problem: Apr 07 13:25:49 old-k8s-version-856421 kubelet[667]: E0407 13:25:49.601173     667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
	W0407 13:30:48.089328 1095137 logs.go:138] Found kubelet problem: Apr 07 13:25:55 old-k8s-version-856421 kubelet[667]: E0407 13:25:55.894550     667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0407 13:30:48.090071 1095137 logs.go:138] Found kubelet problem: Apr 07 13:26:02 old-k8s-version-856421 kubelet[667]: E0407 13:26:02.259589     667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
	W0407 13:30:48.090263 1095137 logs.go:138] Found kubelet problem: Apr 07 13:26:07 old-k8s-version-856421 kubelet[667]: E0407 13:26:07.882067     667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:30:48.090596 1095137 logs.go:138] Found kubelet problem: Apr 07 13:26:09 old-k8s-version-856421 kubelet[667]: E0407 13:26:09.601119     667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
	W0407 13:30:48.090781 1095137 logs.go:138] Found kubelet problem: Apr 07 13:26:21 old-k8s-version-856421 kubelet[667]: E0407 13:26:21.882035     667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:30:48.091367 1095137 logs.go:138] Found kubelet problem: Apr 07 13:26:23 old-k8s-version-856421 kubelet[667]: E0407 13:26:23.333979     667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
	W0407 13:30:48.091693 1095137 logs.go:138] Found kubelet problem: Apr 07 13:26:29 old-k8s-version-856421 kubelet[667]: E0407 13:26:29.601138     667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
	W0407 13:30:48.091878 1095137 logs.go:138] Found kubelet problem: Apr 07 13:26:33 old-k8s-version-856421 kubelet[667]: E0407 13:26:33.882060     667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:30:48.092204 1095137 logs.go:138] Found kubelet problem: Apr 07 13:26:42 old-k8s-version-856421 kubelet[667]: E0407 13:26:42.882285     667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
	W0407 13:30:48.094653 1095137 logs.go:138] Found kubelet problem: Apr 07 13:26:46 old-k8s-version-856421 kubelet[667]: E0407 13:26:46.916880     667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0407 13:30:48.094984 1095137 logs.go:138] Found kubelet problem: Apr 07 13:26:53 old-k8s-version-856421 kubelet[667]: E0407 13:26:53.881641     667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
	W0407 13:30:48.095172 1095137 logs.go:138] Found kubelet problem: Apr 07 13:26:58 old-k8s-version-856421 kubelet[667]: E0407 13:26:58.887165     667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:30:48.095764 1095137 logs.go:138] Found kubelet problem: Apr 07 13:27:05 old-k8s-version-856421 kubelet[667]: E0407 13:27:05.451459     667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
	W0407 13:30:48.096091 1095137 logs.go:138] Found kubelet problem: Apr 07 13:27:09 old-k8s-version-856421 kubelet[667]: E0407 13:27:09.601083     667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
	W0407 13:30:48.096275 1095137 logs.go:138] Found kubelet problem: Apr 07 13:27:11 old-k8s-version-856421 kubelet[667]: E0407 13:27:11.882020     667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:30:48.096603 1095137 logs.go:138] Found kubelet problem: Apr 07 13:27:22 old-k8s-version-856421 kubelet[667]: E0407 13:27:22.882870     667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
	W0407 13:30:48.096788 1095137 logs.go:138] Found kubelet problem: Apr 07 13:27:24 old-k8s-version-856421 kubelet[667]: E0407 13:27:24.883611     667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:30:48.097167 1095137 logs.go:138] Found kubelet problem: Apr 07 13:27:33 old-k8s-version-856421 kubelet[667]: E0407 13:27:33.881645     667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
	W0407 13:30:48.097363 1095137 logs.go:138] Found kubelet problem: Apr 07 13:27:36 old-k8s-version-856421 kubelet[667]: E0407 13:27:36.883495     667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:30:48.097692 1095137 logs.go:138] Found kubelet problem: Apr 07 13:27:46 old-k8s-version-856421 kubelet[667]: E0407 13:27:46.882237     667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
	W0407 13:30:48.097891 1095137 logs.go:138] Found kubelet problem: Apr 07 13:27:49 old-k8s-version-856421 kubelet[667]: E0407 13:27:49.882049     667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:30:48.098077 1095137 logs.go:138] Found kubelet problem: Apr 07 13:28:00 old-k8s-version-856421 kubelet[667]: E0407 13:28:00.882106     667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:30:48.098408 1095137 logs.go:138] Found kubelet problem: Apr 07 13:28:01 old-k8s-version-856421 kubelet[667]: E0407 13:28:01.881859     667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
	W0407 13:30:48.098735 1095137 logs.go:138] Found kubelet problem: Apr 07 13:28:13 old-k8s-version-856421 kubelet[667]: E0407 13:28:13.882356     667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
	W0407 13:30:48.101174 1095137 logs.go:138] Found kubelet problem: Apr 07 13:28:15 old-k8s-version-856421 kubelet[667]: E0407 13:28:15.895177     667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0407 13:30:48.101500 1095137 logs.go:138] Found kubelet problem: Apr 07 13:28:24 old-k8s-version-856421 kubelet[667]: E0407 13:28:24.882233     667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
	W0407 13:30:48.101683 1095137 logs.go:138] Found kubelet problem: Apr 07 13:28:27 old-k8s-version-856421 kubelet[667]: E0407 13:28:27.882283     667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:30:48.102314 1095137 logs.go:138] Found kubelet problem: Apr 07 13:28:36 old-k8s-version-856421 kubelet[667]: E0407 13:28:36.680281     667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
	W0407 13:30:48.102503 1095137 logs.go:138] Found kubelet problem: Apr 07 13:28:38 old-k8s-version-856421 kubelet[667]: E0407 13:28:38.882208     667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:30:48.102831 1095137 logs.go:138] Found kubelet problem: Apr 07 13:28:39 old-k8s-version-856421 kubelet[667]: E0407 13:28:39.601171     667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
	W0407 13:30:48.103015 1095137 logs.go:138] Found kubelet problem: Apr 07 13:28:50 old-k8s-version-856421 kubelet[667]: E0407 13:28:50.882465     667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:30:48.103343 1095137 logs.go:138] Found kubelet problem: Apr 07 13:28:52 old-k8s-version-856421 kubelet[667]: E0407 13:28:52.882220     667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
	W0407 13:30:48.103529 1095137 logs.go:138] Found kubelet problem: Apr 07 13:29:01 old-k8s-version-856421 kubelet[667]: E0407 13:29:01.882101     667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:30:48.103856 1095137 logs.go:138] Found kubelet problem: Apr 07 13:29:04 old-k8s-version-856421 kubelet[667]: E0407 13:29:04.885771     667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
	W0407 13:30:48.104040 1095137 logs.go:138] Found kubelet problem: Apr 07 13:29:15 old-k8s-version-856421 kubelet[667]: E0407 13:29:15.882105     667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:30:48.104366 1095137 logs.go:138] Found kubelet problem: Apr 07 13:29:19 old-k8s-version-856421 kubelet[667]: E0407 13:29:19.881643     667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
	W0407 13:30:48.104552 1095137 logs.go:138] Found kubelet problem: Apr 07 13:29:28 old-k8s-version-856421 kubelet[667]: E0407 13:29:28.884253     667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:30:48.105009 1095137 logs.go:138] Found kubelet problem: Apr 07 13:29:32 old-k8s-version-856421 kubelet[667]: E0407 13:29:32.882068     667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
	W0407 13:30:48.105201 1095137 logs.go:138] Found kubelet problem: Apr 07 13:29:39 old-k8s-version-856421 kubelet[667]: E0407 13:29:39.883031     667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:30:48.105541 1095137 logs.go:138] Found kubelet problem: Apr 07 13:29:47 old-k8s-version-856421 kubelet[667]: E0407 13:29:47.882527     667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
	W0407 13:30:48.105739 1095137 logs.go:138] Found kubelet problem: Apr 07 13:29:50 old-k8s-version-856421 kubelet[667]: E0407 13:29:50.882583     667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:30:48.106067 1095137 logs.go:138] Found kubelet problem: Apr 07 13:29:59 old-k8s-version-856421 kubelet[667]: E0407 13:29:59.882436     667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
	W0407 13:30:48.106251 1095137 logs.go:138] Found kubelet problem: Apr 07 13:30:01 old-k8s-version-856421 kubelet[667]: E0407 13:30:01.885267     667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:30:48.106586 1095137 logs.go:138] Found kubelet problem: Apr 07 13:30:10 old-k8s-version-856421 kubelet[667]: E0407 13:30:10.882871     667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
	W0407 13:30:48.106770 1095137 logs.go:138] Found kubelet problem: Apr 07 13:30:15 old-k8s-version-856421 kubelet[667]: E0407 13:30:15.882153     667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:30:48.107101 1095137 logs.go:138] Found kubelet problem: Apr 07 13:30:22 old-k8s-version-856421 kubelet[667]: E0407 13:30:22.882311     667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
	W0407 13:30:48.107285 1095137 logs.go:138] Found kubelet problem: Apr 07 13:30:28 old-k8s-version-856421 kubelet[667]: E0407 13:30:28.882142     667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:30:48.107610 1095137 logs.go:138] Found kubelet problem: Apr 07 13:30:35 old-k8s-version-856421 kubelet[667]: E0407 13:30:35.885355     667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
	W0407 13:30:48.107794 1095137 logs.go:138] Found kubelet problem: Apr 07 13:30:41 old-k8s-version-856421 kubelet[667]: E0407 13:30:41.882269     667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0407 13:30:48.107807 1095137 logs.go:123] Gathering logs for kube-proxy [77f2619b2d1aad99f71633fed74393b3b19f570432847d24bea319ce3f3cc54b] ...
	I0407 13:30:48.107822 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 77f2619b2d1aad99f71633fed74393b3b19f570432847d24bea319ce3f3cc54b"
	I0407 13:30:48.156575 1095137 logs.go:123] Gathering logs for kindnet [e4568189822d430c58ad9f0a391ab15cbb27395e1408248c2c03f19d5dc9150b] ...
	I0407 13:30:48.156606 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e4568189822d430c58ad9f0a391ab15cbb27395e1408248c2c03f19d5dc9150b"
	I0407 13:30:48.232444 1095137 logs.go:123] Gathering logs for kindnet [b2ada56c528ba0083dc91efb8ea35b5b75cc614fa96a6cd93819b45a72ee05fb] ...
	I0407 13:30:48.232472 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b2ada56c528ba0083dc91efb8ea35b5b75cc614fa96a6cd93819b45a72ee05fb"
	I0407 13:30:48.305914 1095137 logs.go:123] Gathering logs for kube-apiserver [a1ef4f8376e9fc950587e9d4daa95427d19c48e4606813b7ce87102e58fd46af] ...
	I0407 13:30:48.305993 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a1ef4f8376e9fc950587e9d4daa95427d19c48e4606813b7ce87102e58fd46af"
	I0407 13:30:48.379011 1095137 logs.go:123] Gathering logs for kube-apiserver [d89f223f22b8606c12ec0e778fa21d2abe90349e214332d4a1fa32b45f514d9b] ...
	I0407 13:30:48.379086 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d89f223f22b8606c12ec0e778fa21d2abe90349e214332d4a1fa32b45f514d9b"
	I0407 13:30:48.462552 1095137 logs.go:123] Gathering logs for etcd [5864d99cdd47d055aee6b1526bb1c6a1ee5862850613e2d63e0d93e141c7f4d5] ...
	I0407 13:30:48.462584 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5864d99cdd47d055aee6b1526bb1c6a1ee5862850613e2d63e0d93e141c7f4d5"
	I0407 13:30:48.528785 1095137 logs.go:123] Gathering logs for kube-scheduler [c6f6d481b0f4c36b1a380741fe1a1dc69435fb844001d3a69feaa1fddcdd4030] ...
	I0407 13:30:48.528974 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c6f6d481b0f4c36b1a380741fe1a1dc69435fb844001d3a69feaa1fddcdd4030"
	I0407 13:30:48.589264 1095137 logs.go:123] Gathering logs for kube-controller-manager [04c3bb1dfe7c8d458ce5ec56757e932f8a599e0d467221dd4b85ef6a3fefa6df] ...
	I0407 13:30:48.589336 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 04c3bb1dfe7c8d458ce5ec56757e932f8a599e0d467221dd4b85ef6a3fefa6df"
	I0407 13:30:48.680565 1095137 logs.go:123] Gathering logs for kube-controller-manager [2b349ae2ec417a9d000e61e8c66afdcccb2d202d992eec33826df20e410f87a2] ...
	I0407 13:30:48.680604 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b349ae2ec417a9d000e61e8c66afdcccb2d202d992eec33826df20e410f87a2"
	I0407 13:30:48.779599 1095137 logs.go:123] Gathering logs for storage-provisioner [2c994f7c46244b75a5d9fce69b3c3f15c9a2ce0a534a675952a6506d4c18ba61] ...
	I0407 13:30:48.779675 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2c994f7c46244b75a5d9fce69b3c3f15c9a2ce0a534a675952a6506d4c18ba61"
	I0407 13:30:48.835392 1095137 logs.go:123] Gathering logs for containerd ...
	I0407 13:30:48.835418 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0407 13:30:48.929349 1095137 logs.go:123] Gathering logs for dmesg ...
	I0407 13:30:48.929382 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 13:30:48.954396 1095137 logs.go:123] Gathering logs for etcd [ad3658b16b26420a5c8d95f062de8e38e1c9d8d5235a77bf2a50e7228aabb735] ...
	I0407 13:30:48.954423 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ad3658b16b26420a5c8d95f062de8e38e1c9d8d5235a77bf2a50e7228aabb735"
	I0407 13:30:49.030928 1095137 logs.go:123] Gathering logs for coredns [051642364b305742215427b11bf59d630eb773d7d2d8489872ccb8bafb97a33a] ...
	I0407 13:30:49.031024 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 051642364b305742215427b11bf59d630eb773d7d2d8489872ccb8bafb97a33a"
	I0407 13:30:49.110624 1095137 logs.go:123] Gathering logs for kube-scheduler [d6760ad08162f8ec381c7feef6ca753faf89ef2bb6b6aaf4ad3420559e5f73e7] ...
	I0407 13:30:49.110700 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d6760ad08162f8ec381c7feef6ca753faf89ef2bb6b6aaf4ad3420559e5f73e7"
	I0407 13:30:49.161794 1095137 logs.go:123] Gathering logs for kube-proxy [74af9023b4fde2bd349b21c976719ba85dda1ea03171badab844478b8cbf2088] ...
	I0407 13:30:49.161888 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 74af9023b4fde2bd349b21c976719ba85dda1ea03171badab844478b8cbf2088"
	I0407 13:30:49.226058 1095137 out.go:358] Setting ErrFile to fd 2...
	I0407 13:30:49.226135 1095137 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0407 13:30:49.226216 1095137 out.go:270] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0407 13:30:49.226386 1095137 out.go:270]   Apr 07 13:30:15 old-k8s-version-856421 kubelet[667]: E0407 13:30:15.882153     667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Apr 07 13:30:15 old-k8s-version-856421 kubelet[667]: E0407 13:30:15.882153     667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:30:49.226426 1095137 out.go:270]   Apr 07 13:30:22 old-k8s-version-856421 kubelet[667]: E0407 13:30:22.882311     667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
	  Apr 07 13:30:22 old-k8s-version-856421 kubelet[667]: E0407 13:30:22.882311     667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
	W0407 13:30:49.226481 1095137 out.go:270]   Apr 07 13:30:28 old-k8s-version-856421 kubelet[667]: E0407 13:30:28.882142     667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Apr 07 13:30:28 old-k8s-version-856421 kubelet[667]: E0407 13:30:28.882142     667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:30:49.226514 1095137 out.go:270]   Apr 07 13:30:35 old-k8s-version-856421 kubelet[667]: E0407 13:30:35.885355     667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
	  Apr 07 13:30:35 old-k8s-version-856421 kubelet[667]: E0407 13:30:35.885355     667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
	W0407 13:30:49.226557 1095137 out.go:270]   Apr 07 13:30:41 old-k8s-version-856421 kubelet[667]: E0407 13:30:41.882269     667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Apr 07 13:30:41 old-k8s-version-856421 kubelet[667]: E0407 13:30:41.882269     667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0407 13:30:49.226602 1095137 out.go:358] Setting ErrFile to fd 2...
	I0407 13:30:49.226640 1095137 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 13:30:59.228033 1095137 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0407 13:30:59.239760 1095137 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0407 13:30:59.244653 1095137 out.go:201] 
	W0407 13:30:59.247535 1095137 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W0407 13:30:59.247763 1095137 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	* Suggestion: Control Plane could not update, try minikube delete --all --purge
	W0407 13:30:59.247829 1095137 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	* Related issue: https://github.com/kubernetes/minikube/issues/11417
	W0407 13:30:59.247882 1095137 out.go:270] * 
	* 
	W0407 13:30:59.248818 1095137 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0407 13:30:59.252447 1095137 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p old-k8s-version-856421 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0": exit status 102
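Exit status 102 corresponds to the K8S_UNHEALTHY_CONTROL_PLANE error in the stderr block above: the API server answered /healthz with 200, but the control plane never reported v1.20.0 within the 6m0s wait. The metrics-server ImagePullBackOff entries flooding the kubelet log are expected here, since the test deliberately re-registers the metrics-server image to the unreachable fake.domain registry (see the addons enable entry in the Audit table below). A minimal sketch of the recovery path the log itself suggests, assuming the profile name and binary from this run:

	# Hedged manual reproduction of the suggestion printed above; not part of the test harness.
	# Wipe all profiles and cached state, then retry the same start invocation.
	out/minikube-linux-arm64 delete --all --purge
	out/minikube-linux-arm64 start -p old-k8s-version-856421 --memory=2200 \
	  --driver=docker --container-runtime=containerd --kubernetes-version=v1.20.0
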
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-856421
helpers_test.go:235: (dbg) docker inspect old-k8s-version-856421:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "0ec7499281b1af3975317f28c2f45ab23ffdf6f53892f067820d159417dc17d0",
	        "Created": "2025-04-07T13:21:37.743742867Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1095306,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-04-07T13:24:46.516691813Z",
	            "FinishedAt": "2025-04-07T13:24:45.494837311Z"
	        },
	        "Image": "sha256:1a97cd9e9bbab266425b883d3ed87fee4969302ed9a49ce4df4bf460f6bbf404",
	        "ResolvConfPath": "/var/lib/docker/containers/0ec7499281b1af3975317f28c2f45ab23ffdf6f53892f067820d159417dc17d0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0ec7499281b1af3975317f28c2f45ab23ffdf6f53892f067820d159417dc17d0/hostname",
	        "HostsPath": "/var/lib/docker/containers/0ec7499281b1af3975317f28c2f45ab23ffdf6f53892f067820d159417dc17d0/hosts",
	        "LogPath": "/var/lib/docker/containers/0ec7499281b1af3975317f28c2f45ab23ffdf6f53892f067820d159417dc17d0/0ec7499281b1af3975317f28c2f45ab23ffdf6f53892f067820d159417dc17d0-json.log",
	        "Name": "/old-k8s-version-856421",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-856421:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-856421",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0ec7499281b1af3975317f28c2f45ab23ffdf6f53892f067820d159417dc17d0",
	                "LowerDir": "/var/lib/docker/overlay2/3f8cb1cfa6829451e2d68ed2a44b6f349cb628ec76ddd639b24aac6efa846b9f-init/diff:/var/lib/docker/overlay2/85f90d92e092517cca50dbac98636b783956eaa528934db46fb23992a850b0ad/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3f8cb1cfa6829451e2d68ed2a44b6f349cb628ec76ddd639b24aac6efa846b9f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3f8cb1cfa6829451e2d68ed2a44b6f349cb628ec76ddd639b24aac6efa846b9f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3f8cb1cfa6829451e2d68ed2a44b6f349cb628ec76ddd639b24aac6efa846b9f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-856421",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-856421/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-856421",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-856421",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-856421",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c93c015a8f4613610fe06f13d793b5e51fad2752271eba1152ee8674fb2da0ea",
	            "SandboxKey": "/var/run/docker/netns/c93c015a8f46",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34180"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34181"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34184"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34182"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34183"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-856421": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "5a:54:29:5c:23:cd",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "84e74c9d326f8cf1ccd793b6c9408d565d75088ea9d7271ce39b18e3801f5b6e",
	                    "EndpointID": "23bae7889b8bb363aa570e2103614084d7f01195d0aa49291700a361d986a40a",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-856421",
	                        "0ec7499281b1"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
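The inspect output shows the container itself is healthy: State.Status is "running" with ExitCode 0 and RestartCount 0, and the apiserver port 8443/tcp is published on 127.0.0.1:34183, so the failure sits inside the cluster rather than at the Docker layer. A minimal sketch for pulling just those fields instead of scanning the full JSON, assuming the same container name:

	# Hedged convenience commands using docker inspect's Go-template --format flag.
	docker inspect old-k8s-version-856421 \
	  --format 'status={{.State.Status}} exit={{.State.ExitCode}} restarts={{.RestartCount}}'
	docker inspect old-k8s-version-856421 \
	  --format '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'
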
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-856421 -n old-k8s-version-856421
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-856421 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-856421 logs -n 25: (2.966783748s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| ssh     | cert-options-839524 ssh                                | cert-options-839524    | jenkins | v1.35.0 | 07 Apr 25 13:21 UTC | 07 Apr 25 13:21 UTC |
	|         | openssl x509 -text -noout -in                          |                        |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                        |         |         |                     |                     |
	| ssh     | -p cert-options-839524 -- sudo                         | cert-options-839524    | jenkins | v1.35.0 | 07 Apr 25 13:21 UTC | 07 Apr 25 13:21 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                        |         |         |                     |                     |
	| delete  | -p cert-options-839524                                 | cert-options-839524    | jenkins | v1.35.0 | 07 Apr 25 13:21 UTC | 07 Apr 25 13:21 UTC |
	| start   | -p old-k8s-version-856421                              | old-k8s-version-856421 | jenkins | v1.35.0 | 07 Apr 25 13:21 UTC | 07 Apr 25 13:24 UTC |
	|         | --memory=2200                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                        |         |         |                     |                     |
	|         | --kvm-network=default                                  |                        |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                        |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                        |         |         |                     |                     |
	|         | --keep-context=false                                   |                        |         |         |                     |                     |
	|         | --driver=docker                                        |                        |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                        |         |         |                     |                     |
	| start   | -p cert-expiration-618228                              | cert-expiration-618228 | jenkins | v1.35.0 | 07 Apr 25 13:22 UTC | 07 Apr 25 13:23 UTC |
	|         | --memory=2048                                          |                        |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                        |         |         |                     |                     |
	|         | --driver=docker                                        |                        |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                        |         |         |                     |                     |
	| delete  | -p cert-expiration-618228                              | cert-expiration-618228 | jenkins | v1.35.0 | 07 Apr 25 13:23 UTC | 07 Apr 25 13:23 UTC |
	| start   | -p no-preload-789804                                   | no-preload-789804      | jenkins | v1.35.0 | 07 Apr 25 13:23 UTC | 07 Apr 25 13:24 UTC |
	|         | --memory=2200                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr                                      |                        |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                        |         |         |                     |                     |
	|         | --driver=docker                                        |                        |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                           |                        |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-789804             | no-preload-789804      | jenkins | v1.35.0 | 07 Apr 25 13:24 UTC | 07 Apr 25 13:24 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                        |         |         |                     |                     |
	| stop    | -p no-preload-789804                                   | no-preload-789804      | jenkins | v1.35.0 | 07 Apr 25 13:24 UTC | 07 Apr 25 13:24 UTC |
	|         | --alsologtostderr -v=3                                 |                        |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-856421        | old-k8s-version-856421 | jenkins | v1.35.0 | 07 Apr 25 13:24 UTC | 07 Apr 25 13:24 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                        |         |         |                     |                     |
	| stop    | -p old-k8s-version-856421                              | old-k8s-version-856421 | jenkins | v1.35.0 | 07 Apr 25 13:24 UTC | 07 Apr 25 13:24 UTC |
	|         | --alsologtostderr -v=3                                 |                        |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-789804                  | no-preload-789804      | jenkins | v1.35.0 | 07 Apr 25 13:24 UTC | 07 Apr 25 13:24 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                        |         |         |                     |                     |
	| start   | -p no-preload-789804                                   | no-preload-789804      | jenkins | v1.35.0 | 07 Apr 25 13:24 UTC | 07 Apr 25 13:29 UTC |
	|         | --memory=2200                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr                                      |                        |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                        |         |         |                     |                     |
	|         | --driver=docker                                        |                        |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                           |                        |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-856421             | old-k8s-version-856421 | jenkins | v1.35.0 | 07 Apr 25 13:24 UTC | 07 Apr 25 13:24 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                        |         |         |                     |                     |
	| start   | -p old-k8s-version-856421                              | old-k8s-version-856421 | jenkins | v1.35.0 | 07 Apr 25 13:24 UTC |                     |
	|         | --memory=2200                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                        |         |         |                     |                     |
	|         | --kvm-network=default                                  |                        |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                        |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                        |         |         |                     |                     |
	|         | --keep-context=false                                   |                        |         |         |                     |                     |
	|         | --driver=docker                                        |                        |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                        |         |         |                     |                     |
	| image   | no-preload-789804 image list                           | no-preload-789804      | jenkins | v1.35.0 | 07 Apr 25 13:29 UTC | 07 Apr 25 13:29 UTC |
	|         | --format=json                                          |                        |         |         |                     |                     |
	| pause   | -p no-preload-789804                                   | no-preload-789804      | jenkins | v1.35.0 | 07 Apr 25 13:29 UTC | 07 Apr 25 13:29 UTC |
	|         | --alsologtostderr -v=1                                 |                        |         |         |                     |                     |
	| unpause | -p no-preload-789804                                   | no-preload-789804      | jenkins | v1.35.0 | 07 Apr 25 13:29 UTC | 07 Apr 25 13:29 UTC |
	|         | --alsologtostderr -v=1                                 |                        |         |         |                     |                     |
	| delete  | -p no-preload-789804                                   | no-preload-789804      | jenkins | v1.35.0 | 07 Apr 25 13:29 UTC | 07 Apr 25 13:29 UTC |
	| delete  | -p no-preload-789804                                   | no-preload-789804      | jenkins | v1.35.0 | 07 Apr 25 13:29 UTC | 07 Apr 25 13:29 UTC |
	| start   | -p embed-certs-688390                                  | embed-certs-688390     | jenkins | v1.35.0 | 07 Apr 25 13:29 UTC | 07 Apr 25 13:30 UTC |
	|         | --memory=2200                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                        |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                        |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                           |                        |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-688390            | embed-certs-688390     | jenkins | v1.35.0 | 07 Apr 25 13:30 UTC | 07 Apr 25 13:30 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                        |         |         |                     |                     |
	| stop    | -p embed-certs-688390                                  | embed-certs-688390     | jenkins | v1.35.0 | 07 Apr 25 13:30 UTC | 07 Apr 25 13:30 UTC |
	|         | --alsologtostderr -v=3                                 |                        |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-688390                 | embed-certs-688390     | jenkins | v1.35.0 | 07 Apr 25 13:30 UTC | 07 Apr 25 13:30 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                        |         |         |                     |                     |
	| start   | -p embed-certs-688390                                  | embed-certs-688390     | jenkins | v1.35.0 | 07 Apr 25 13:30 UTC |                     |
	|         | --memory=2200                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                        |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                        |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                           |                        |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/07 13:30:41
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0407 13:30:41.565899 1107590 out.go:345] Setting OutFile to fd 1 ...
	I0407 13:30:41.566046 1107590 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 13:30:41.566070 1107590 out.go:358] Setting ErrFile to fd 2...
	I0407 13:30:41.566092 1107590 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 13:30:41.566388 1107590 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20602-873072/.minikube/bin
	I0407 13:30:41.566803 1107590 out.go:352] Setting JSON to false
	I0407 13:30:41.567890 1107590 start.go:129] hostinfo: {"hostname":"ip-172-31-24-2","uptime":18786,"bootTime":1744013856,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1081-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I0407 13:30:41.567962 1107590 start.go:139] virtualization:  
	I0407 13:30:41.572889 1107590 out.go:177] * [embed-certs-688390] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	I0407 13:30:41.576010 1107590 out.go:177]   - MINIKUBE_LOCATION=20602
	I0407 13:30:41.576036 1107590 notify.go:220] Checking for updates...
	I0407 13:30:41.579151 1107590 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0407 13:30:41.582075 1107590 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20602-873072/kubeconfig
	I0407 13:30:41.585126 1107590 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20602-873072/.minikube
	I0407 13:30:41.588117 1107590 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0407 13:30:41.591038 1107590 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0407 13:30:41.594523 1107590 config.go:182] Loaded profile config "embed-certs-688390": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0407 13:30:41.595104 1107590 driver.go:394] Setting default libvirt URI to qemu:///system
	I0407 13:30:41.621022 1107590 docker.go:123] docker version: linux-28.0.4:Docker Engine - Community
	I0407 13:30:41.621172 1107590 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0407 13:30:41.683285 1107590 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:43 OomKillDisable:true NGoroutines:52 SystemTime:2025-04-07 13:30:41.673009013 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1081-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0]] Warnings:<nil>}}
	I0407 13:30:41.683397 1107590 docker.go:318] overlay module found
	I0407 13:30:41.686547 1107590 out.go:177] * Using the docker driver based on existing profile
	I0407 13:30:41.689509 1107590 start.go:297] selected driver: docker
	I0407 13:30:41.689535 1107590 start.go:901] validating driver "docker" against &{Name:embed-certs-688390 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:embed-certs-688390 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 13:30:41.689653 1107590 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0407 13:30:41.690420 1107590 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0407 13:30:41.753168 1107590 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:43 OomKillDisable:true NGoroutines:52 SystemTime:2025-04-07 13:30:41.743995038 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1081-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0]] Warnings:<nil>}}
	I0407 13:30:41.753512 1107590 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0407 13:30:41.753545 1107590 cni.go:84] Creating CNI manager for ""
	I0407 13:30:41.753606 1107590 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0407 13:30:41.753653 1107590 start.go:340] cluster config:
	{Name:embed-certs-688390 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:embed-certs-688390 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 13:30:41.758727 1107590 out.go:177] * Starting "embed-certs-688390" primary control-plane node in "embed-certs-688390" cluster
	I0407 13:30:41.761636 1107590 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0407 13:30:41.764759 1107590 out.go:177] * Pulling base image v0.0.46-1743675393-20591 ...
	I0407 13:30:41.767613 1107590 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
	I0407 13:30:41.767682 1107590 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20602-873072/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-arm64.tar.lz4
	I0407 13:30:41.767691 1107590 cache.go:56] Caching tarball of preloaded images
	I0407 13:30:41.767734 1107590 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 in local docker daemon
	I0407 13:30:41.767793 1107590 preload.go:172] Found /home/jenkins/minikube-integration/20602-873072/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0407 13:30:41.767803 1107590 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on containerd
	I0407 13:30:41.767926 1107590 profile.go:143] Saving config to /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/embed-certs-688390/config.json ...
	I0407 13:30:41.788110 1107590 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 in local docker daemon, skipping pull
	I0407 13:30:41.788134 1107590 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 exists in daemon, skipping load
	I0407 13:30:41.788152 1107590 cache.go:230] Successfully downloaded all kic artifacts
	I0407 13:30:41.788175 1107590 start.go:360] acquireMachinesLock for embed-certs-688390: {Name:mk224d0616c94c039dbad0154f78977cda80f3b6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0407 13:30:41.788262 1107590 start.go:364] duration metric: took 57.289µs to acquireMachinesLock for "embed-certs-688390"
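The acquireMachinesLock line above reflects a poll-until-timeout acquisition (Delay:500ms Timeout:10m0s), and the duration metric records how long the wait took. A minimal Go sketch of that shape, with a buffered channel standing in for minikube's real lock backend; tryAcquire is an illustrative helper, not minikube's API:

package main

import (
	"errors"
	"fmt"
	"time"
)

// tryAcquire polls an acquire function every delay until timeout elapses,
// the generic shape of the machines-lock acquisition timed in the log.
func tryAcquire(acquire func() bool, delay, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if acquire() {
			return nil
		}
		time.Sleep(delay)
	}
	return errors.New("lock acquisition timed out")
}

func main() {
	sem := make(chan struct{}, 1) // stand-in lock: holds at most one owner
	acquire := func() bool {
		select {
		case sem <- struct{}{}:
			return true
		default:
			return false
		}
	}
	start := time.Now()
	if err := tryAcquire(acquire, 500*time.Millisecond, 10*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("acquired in %s\n", time.Since(start)) // analogous to the duration metric above
}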
	I0407 13:30:41.788293 1107590 start.go:96] Skipping create...Using existing machine configuration
	I0407 13:30:41.788351 1107590 fix.go:54] fixHost starting: 
	I0407 13:30:41.788611 1107590 cli_runner.go:164] Run: docker container inspect embed-certs-688390 --format={{.State.Status}}
	I0407 13:30:41.805616 1107590 fix.go:112] recreateIfNeeded on embed-certs-688390: state=Stopped err=<nil>
	W0407 13:30:41.805649 1107590 fix.go:138] unexpected machine state, will restart: <nil>
	I0407 13:30:41.808804 1107590 out.go:177] * Restarting existing docker container for "embed-certs-688390" ...
	I0407 13:30:41.811783 1107590 cli_runner.go:164] Run: docker start embed-certs-688390
	I0407 13:30:42.127773 1107590 cli_runner.go:164] Run: docker container inspect embed-certs-688390 --format={{.State.Status}}
	I0407 13:30:42.157334 1107590 kic.go:430] container "embed-certs-688390" state is running.
	I0407 13:30:42.157838 1107590 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-688390
	I0407 13:30:42.183983 1107590 profile.go:143] Saving config to /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/embed-certs-688390/config.json ...
	I0407 13:30:42.184375 1107590 machine.go:93] provisionDockerMachine start ...
	I0407 13:30:42.184469 1107590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-688390
	I0407 13:30:42.219282 1107590 main.go:141] libmachine: Using SSH client type: native
	I0407 13:30:42.219728 1107590 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e66c0] 0x3e8e80 <nil>  [] 0s} 127.0.0.1 34190 <nil> <nil>}
	I0407 13:30:42.219744 1107590 main.go:141] libmachine: About to run SSH command:
	hostname
	I0407 13:30:42.220548 1107590 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:35454->127.0.0.1:34190: read: connection reset by peer
	I0407 13:30:45.382821 1107590 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-688390
	
	I0407 13:30:45.382926 1107590 ubuntu.go:169] provisioning hostname "embed-certs-688390"
	I0407 13:30:45.383037 1107590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-688390
	I0407 13:30:45.404547 1107590 main.go:141] libmachine: Using SSH client type: native
	I0407 13:30:45.405146 1107590 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e66c0] 0x3e8e80 <nil>  [] 0s} 127.0.0.1 34190 <nil> <nil>}
	I0407 13:30:45.405167 1107590 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-688390 && echo "embed-certs-688390" | sudo tee /etc/hostname
	I0407 13:30:45.547583 1107590 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-688390
	
	I0407 13:30:45.547669 1107590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-688390
	I0407 13:30:45.566078 1107590 main.go:141] libmachine: Using SSH client type: native
	I0407 13:30:45.566411 1107590 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e66c0] 0x3e8e80 <nil>  [] 0s} 127.0.0.1 34190 <nil> <nil>}
	I0407 13:30:45.566435 1107590 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-688390' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-688390/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-688390' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0407 13:30:45.690149 1107590 main.go:141] libmachine: SSH cmd err, output: <nil>: 
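The shell script run above adds a 127.0.1.1 entry for the hostname only when no matching entry exists. A minimal Go sketch of the same idempotent update, assuming bash and sudo are available on the host; ensureHostsEntry is an illustrative name and a simplification of the grep/sed logic shown, not minikube's implementation:

package main

import (
	"fmt"
	"os/exec"
)

// ensureHostsEntry appends "127.0.1.1 <name>" to /etc/hosts only when no
// entry for <name> is present, mirroring the shell logic in the log above.
func ensureHostsEntry(name string) error {
	script := fmt.Sprintf(`if ! grep -q '\s%[1]s$' /etc/hosts; then
  echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts >/dev/null
fi`, name)
	out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput()
	if err != nil {
		return fmt.Errorf("hosts update failed: %v: %s", err, out)
	}
	return nil
}

func main() {
	if err := ensureHostsEntry("embed-certs-688390"); err != nil {
		fmt.Println(err)
	}
}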
	I0407 13:30:45.690176 1107590 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20602-873072/.minikube CaCertPath:/home/jenkins/minikube-integration/20602-873072/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20602-873072/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20602-873072/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20602-873072/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20602-873072/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20602-873072/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20602-873072/.minikube}
	I0407 13:30:45.690200 1107590 ubuntu.go:177] setting up certificates
	I0407 13:30:45.690210 1107590 provision.go:84] configureAuth start
	I0407 13:30:45.690274 1107590 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-688390
	I0407 13:30:45.709416 1107590 provision.go:143] copyHostCerts
	I0407 13:30:45.709488 1107590 exec_runner.go:144] found /home/jenkins/minikube-integration/20602-873072/.minikube/ca.pem, removing ...
	I0407 13:30:45.709513 1107590 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20602-873072/.minikube/ca.pem
	I0407 13:30:45.709592 1107590 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20602-873072/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20602-873072/.minikube/ca.pem (1078 bytes)
	I0407 13:30:45.709764 1107590 exec_runner.go:144] found /home/jenkins/minikube-integration/20602-873072/.minikube/cert.pem, removing ...
	I0407 13:30:45.709776 1107590 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20602-873072/.minikube/cert.pem
	I0407 13:30:45.709813 1107590 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20602-873072/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20602-873072/.minikube/cert.pem (1123 bytes)
	I0407 13:30:45.709892 1107590 exec_runner.go:144] found /home/jenkins/minikube-integration/20602-873072/.minikube/key.pem, removing ...
	I0407 13:30:45.709902 1107590 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20602-873072/.minikube/key.pem
	I0407 13:30:45.709936 1107590 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20602-873072/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20602-873072/.minikube/key.pem (1675 bytes)
	I0407 13:30:45.710001 1107590 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20602-873072/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20602-873072/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20602-873072/.minikube/certs/ca-key.pem org=jenkins.embed-certs-688390 san=[127.0.0.1 192.168.85.2 embed-certs-688390 localhost minikube]
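The server certificate generated above carries SANs for 127.0.0.1, 192.168.85.2, the hostname, localhost, and minikube. A self-signed Go sketch of issuing a certificate with that SAN list; minikube actually signs with its ca.pem/ca-key.pem, and the self-signing here is purely for brevity:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		fmt.Println(err)
		return
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-688390"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the profile config
		DNSNames:     []string{"embed-certs-688390", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	// Self-signed: template doubles as parent. minikube signs with its CA instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		fmt.Println(err)
		return
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}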
	I0407 13:30:46.055120 1107590 provision.go:177] copyRemoteCerts
	I0407 13:30:46.055193 1107590 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0407 13:30:46.055234 1107590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-688390
	I0407 13:30:46.073901 1107590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34190 SSHKeyPath:/home/jenkins/minikube-integration/20602-873072/.minikube/machines/embed-certs-688390/id_rsa Username:docker}
	I0407 13:30:46.162931 1107590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-873072/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0407 13:30:46.188052 1107590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-873072/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0407 13:30:46.214271 1107590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-873072/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0407 13:30:46.244649 1107590 provision.go:87] duration metric: took 554.420681ms to configureAuth
	I0407 13:30:46.244719 1107590 ubuntu.go:193] setting minikube options for container-runtime
	I0407 13:30:46.244946 1107590 config.go:182] Loaded profile config "embed-certs-688390": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0407 13:30:46.244964 1107590 machine.go:96] duration metric: took 4.06057611s to provisionDockerMachine
	I0407 13:30:46.244974 1107590 start.go:293] postStartSetup for "embed-certs-688390" (driver="docker")
	I0407 13:30:46.244985 1107590 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0407 13:30:46.245038 1107590 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0407 13:30:46.245091 1107590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-688390
	I0407 13:30:46.262856 1107590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34190 SSHKeyPath:/home/jenkins/minikube-integration/20602-873072/.minikube/machines/embed-certs-688390/id_rsa Username:docker}
	I0407 13:30:46.359623 1107590 ssh_runner.go:195] Run: cat /etc/os-release
	I0407 13:30:46.363088 1107590 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0407 13:30:46.363126 1107590 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0407 13:30:46.363137 1107590 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0407 13:30:46.363144 1107590 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0407 13:30:46.363154 1107590 filesync.go:126] Scanning /home/jenkins/minikube-integration/20602-873072/.minikube/addons for local assets ...
	I0407 13:30:46.363209 1107590 filesync.go:126] Scanning /home/jenkins/minikube-integration/20602-873072/.minikube/files for local assets ...
	I0407 13:30:46.363294 1107590 filesync.go:149] local asset: /home/jenkins/minikube-integration/20602-873072/.minikube/files/etc/ssl/certs/8785942.pem -> 8785942.pem in /etc/ssl/certs
	I0407 13:30:46.363413 1107590 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0407 13:30:46.372751 1107590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-873072/.minikube/files/etc/ssl/certs/8785942.pem --> /etc/ssl/certs/8785942.pem (1708 bytes)
	I0407 13:30:46.398518 1107590 start.go:296] duration metric: took 153.528422ms for postStartSetup
	I0407 13:30:46.398657 1107590 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0407 13:30:46.398707 1107590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-688390
	I0407 13:30:46.416799 1107590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34190 SSHKeyPath:/home/jenkins/minikube-integration/20602-873072/.minikube/machines/embed-certs-688390/id_rsa Username:docker}
	I0407 13:30:46.502710 1107590 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0407 13:30:46.507291 1107590 fix.go:56] duration metric: took 4.718931945s for fixHost
	I0407 13:30:46.507316 1107590 start.go:83] releasing machines lock for "embed-certs-688390", held for 4.719039532s
	I0407 13:30:46.507381 1107590 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-688390
	I0407 13:30:46.525049 1107590 ssh_runner.go:195] Run: cat /version.json
	I0407 13:30:46.525108 1107590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-688390
	I0407 13:30:46.525357 1107590 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0407 13:30:46.525405 1107590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-688390
	I0407 13:30:46.556516 1107590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34190 SSHKeyPath:/home/jenkins/minikube-integration/20602-873072/.minikube/machines/embed-certs-688390/id_rsa Username:docker}
	I0407 13:30:46.559715 1107590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34190 SSHKeyPath:/home/jenkins/minikube-integration/20602-873072/.minikube/machines/embed-certs-688390/id_rsa Username:docker}
	I0407 13:30:46.781232 1107590 ssh_runner.go:195] Run: systemctl --version
	I0407 13:30:46.785892 1107590 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0407 13:30:46.790700 1107590 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0407 13:30:46.809689 1107590 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0407 13:30:46.809788 1107590 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0407 13:30:46.820522 1107590 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
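The find/mv pipeline above moves bridge and podman CNI configs out of the way by renaming them with a .mk_disabled suffix. A simplified Go equivalent under the same /etc/cni/net.d layout; the glob patterns are an approximation of the full find expression:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// Rename bridge/podman CNI configs with a .mk_disabled suffix so the
// runtime ignores them, as the find/mv pipeline in the log does.
func main() {
	for _, pattern := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
		matches, _ := filepath.Glob(pattern)
		for _, m := range matches {
			if filepath.Ext(m) == ".mk_disabled" {
				continue // already disabled
			}
			if err := os.Rename(m, m+".mk_disabled"); err != nil {
				fmt.Println("rename:", err)
			}
		}
	}
}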
	I0407 13:30:46.820545 1107590 start.go:495] detecting cgroup driver to use...
	I0407 13:30:46.820578 1107590 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0407 13:30:46.820629 1107590 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0407 13:30:46.838341 1107590 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0407 13:30:46.861583 1107590 docker.go:217] disabling cri-docker service (if available) ...
	I0407 13:30:46.861721 1107590 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0407 13:30:46.885229 1107590 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0407 13:30:46.898634 1107590 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0407 13:30:47.026720 1107590 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0407 13:30:47.153939 1107590 docker.go:233] disabling docker service ...
	I0407 13:30:47.154062 1107590 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0407 13:30:47.170436 1107590 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0407 13:30:47.183693 1107590 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0407 13:30:47.315527 1107590 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0407 13:30:47.449767 1107590 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0407 13:30:47.463770 1107590 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0407 13:30:47.486900 1107590 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0407 13:30:47.500570 1107590 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0407 13:30:47.514776 1107590 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0407 13:30:47.514851 1107590 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0407 13:30:47.530042 1107590 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0407 13:30:47.542443 1107590 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0407 13:30:47.558973 1107590 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0407 13:30:47.570359 1107590 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0407 13:30:47.581007 1107590 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0407 13:30:47.592571 1107590 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0407 13:30:47.604567 1107590 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
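The sed runs above patch /etc/containerd/config.toml in place, for example forcing SystemdCgroup = false to match the cgroupfs driver detected on the host. A Go sketch of that single rewrite, using a regexp as an illustrative stand-in for a real TOML parser:

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setSystemdCgroup rewrites the SystemdCgroup line in a containerd
// config.toml, equivalent to the sed command in the log above.
func setSystemdCgroup(path string, enabled bool) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	out := re.ReplaceAll(data, []byte(fmt.Sprintf("${1}SystemdCgroup = %t", enabled)))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	if err := setSystemdCgroup("/etc/containerd/config.toml", false); err != nil {
		fmt.Println(err)
	}
}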
	I0407 13:30:47.616891 1107590 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0407 13:30:47.627954 1107590 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0407 13:30:47.638405 1107590 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 13:30:47.764752 1107590 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0407 13:30:48.010970 1107590 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0407 13:30:48.011070 1107590 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
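The 60s wait above polls until /run/containerd/containerd.sock appears after the containerd restart. A minimal Go sketch of that wait loop; the 500ms poll interval is an assumption, not necessarily what minikube uses:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls for a socket path until it exists or the timeout
// elapses, the same pattern as the 60s wait in the log above.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second); err != nil {
		fmt.Println(err)
	}
}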
	I0407 13:30:48.016902 1107590 start.go:563] Will wait 60s for crictl version
	I0407 13:30:48.017002 1107590 ssh_runner.go:195] Run: which crictl
	I0407 13:30:48.030582 1107590 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0407 13:30:48.119713 1107590 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.27
	RuntimeApiVersion:  v1
	I0407 13:30:48.119802 1107590 ssh_runner.go:195] Run: containerd --version
	I0407 13:30:48.158606 1107590 ssh_runner.go:195] Run: containerd --version
	I0407 13:30:48.194030 1107590 out.go:177] * Preparing Kubernetes v1.32.2 on containerd 1.7.27 ...
	I0407 13:30:48.196953 1107590 cli_runner.go:164] Run: docker network inspect embed-certs-688390 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0407 13:30:48.222137 1107590 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0407 13:30:48.226517 1107590 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0407 13:30:48.242012 1107590 kubeadm.go:883] updating cluster {Name:embed-certs-688390 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:embed-certs-688390 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0407 13:30:48.242151 1107590 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
	I0407 13:30:48.242209 1107590 ssh_runner.go:195] Run: sudo crictl images --output json
	I0407 13:30:48.294584 1107590 containerd.go:627] all images are preloaded for containerd runtime.
	I0407 13:30:48.294605 1107590 containerd.go:534] Images already preloaded, skipping extraction
	I0407 13:30:48.294670 1107590 ssh_runner.go:195] Run: sudo crictl images --output json
	I0407 13:30:48.372573 1107590 containerd.go:627] all images are preloaded for containerd runtime.
	I0407 13:30:48.372598 1107590 cache_images.go:84] Images are preloaded, skipping loading
	I0407 13:30:48.372606 1107590 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.32.2 containerd true true} ...
	I0407 13:30:48.372709 1107590 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-688390 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:embed-certs-688390 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0407 13:30:48.372782 1107590 ssh_runner.go:195] Run: sudo crictl info
	I0407 13:30:48.424826 1107590 cni.go:84] Creating CNI manager for ""
	I0407 13:30:48.424856 1107590 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0407 13:30:48.424867 1107590 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0407 13:30:48.424889 1107590 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-688390 NodeName:embed-certs-688390 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0407 13:30:48.425005 1107590 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "embed-certs-688390"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
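The kubeadm config above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by --- markers. A Go sketch that decodes each document and reports its kind; gopkg.in/yaml.v3 is used here for illustration, while kubeadm itself uses its own typed decoders:

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("kubeadm.yaml") // e.g. a copy of /var/tmp/minikube/kubeadm.yaml.new
	if err != nil {
		fmt.Println(err)
		return
	}
	defer f.Close()

	// yaml.v3's Decoder iterates over "---"-separated documents.
	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			fmt.Println("decode:", err)
			return
		}
		fmt.Printf("kind=%v apiVersion=%v\n", doc["kind"], doc["apiVersion"])
	}
}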
	I0407 13:30:48.425090 1107590 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0407 13:30:48.451243 1107590 binaries.go:44] Found k8s binaries, skipping transfer
	I0407 13:30:48.451319 1107590 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0407 13:30:48.467426 1107590 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I0407 13:30:48.492121 1107590 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0407 13:30:48.514477 1107590 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2308 bytes)
	I0407 13:30:48.538408 1107590 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0407 13:30:48.543736 1107590 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0407 13:30:48.559395 1107590 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 13:30:48.687669 1107590 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0407 13:30:48.705818 1107590 certs.go:68] Setting up /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/embed-certs-688390 for IP: 192.168.85.2
	I0407 13:30:48.705889 1107590 certs.go:194] generating shared ca certs ...
	I0407 13:30:48.705920 1107590 certs.go:226] acquiring lock for ca certs: {Name:mk03094d90434f2a42c24ebaddfee021594c5911 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:30:48.706080 1107590 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20602-873072/.minikube/ca.key
	I0407 13:30:48.706168 1107590 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20602-873072/.minikube/proxy-client-ca.key
	I0407 13:30:48.706193 1107590 certs.go:256] generating profile certs ...
	I0407 13:30:48.706312 1107590 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/embed-certs-688390/client.key
	I0407 13:30:48.706432 1107590 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/embed-certs-688390/apiserver.key.bc2ed1e9
	I0407 13:30:48.706521 1107590 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/embed-certs-688390/proxy-client.key
	I0407 13:30:48.706662 1107590 certs.go:484] found cert: /home/jenkins/minikube-integration/20602-873072/.minikube/certs/878594.pem (1338 bytes)
	W0407 13:30:48.706735 1107590 certs.go:480] ignoring /home/jenkins/minikube-integration/20602-873072/.minikube/certs/878594_empty.pem, impossibly tiny 0 bytes
	I0407 13:30:48.706762 1107590 certs.go:484] found cert: /home/jenkins/minikube-integration/20602-873072/.minikube/certs/ca-key.pem (1675 bytes)
	I0407 13:30:48.706816 1107590 certs.go:484] found cert: /home/jenkins/minikube-integration/20602-873072/.minikube/certs/ca.pem (1078 bytes)
	I0407 13:30:48.706860 1107590 certs.go:484] found cert: /home/jenkins/minikube-integration/20602-873072/.minikube/certs/cert.pem (1123 bytes)
	I0407 13:30:48.706913 1107590 certs.go:484] found cert: /home/jenkins/minikube-integration/20602-873072/.minikube/certs/key.pem (1675 bytes)
	I0407 13:30:48.706981 1107590 certs.go:484] found cert: /home/jenkins/minikube-integration/20602-873072/.minikube/files/etc/ssl/certs/8785942.pem (1708 bytes)
	I0407 13:30:48.707616 1107590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-873072/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0407 13:30:48.774365 1107590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-873072/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0407 13:30:48.826798 1107590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-873072/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0407 13:30:48.868159 1107590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-873072/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0407 13:30:48.946198 1107590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/embed-certs-688390/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0407 13:30:49.015151 1107590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/embed-certs-688390/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0407 13:30:49.083090 1107590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/embed-certs-688390/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0407 13:30:49.137662 1107590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/embed-certs-688390/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0407 13:30:49.172336 1107590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-873072/.minikube/files/etc/ssl/certs/8785942.pem --> /usr/share/ca-certificates/8785942.pem (1708 bytes)
	I0407 13:30:49.204223 1107590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-873072/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0407 13:30:49.236211 1107590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-873072/.minikube/certs/878594.pem --> /usr/share/ca-certificates/878594.pem (1338 bytes)
	I0407 13:30:49.262654 1107590 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0407 13:30:49.282028 1107590 ssh_runner.go:195] Run: openssl version
	I0407 13:30:49.288180 1107590 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8785942.pem && ln -fs /usr/share/ca-certificates/8785942.pem /etc/ssl/certs/8785942.pem"
	I0407 13:30:49.298535 1107590 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8785942.pem
	I0407 13:30:49.302471 1107590 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  7 12:44 /usr/share/ca-certificates/8785942.pem
	I0407 13:30:49.302595 1107590 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8785942.pem
	I0407 13:30:49.310206 1107590 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8785942.pem /etc/ssl/certs/3ec20f2e.0"
	I0407 13:30:49.320210 1107590 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0407 13:30:49.330434 1107590 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0407 13:30:49.334347 1107590 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  7 12:37 /usr/share/ca-certificates/minikubeCA.pem
	I0407 13:30:49.334432 1107590 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0407 13:30:49.342273 1107590 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0407 13:30:49.351924 1107590 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/878594.pem && ln -fs /usr/share/ca-certificates/878594.pem /etc/ssl/certs/878594.pem"
	I0407 13:30:49.362078 1107590 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/878594.pem
	I0407 13:30:49.365810 1107590 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  7 12:44 /usr/share/ca-certificates/878594.pem
	I0407 13:30:49.365920 1107590 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/878594.pem
	I0407 13:30:49.373219 1107590 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/878594.pem /etc/ssl/certs/51391683.0"
	I0407 13:30:49.382767 1107590 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0407 13:30:49.386569 1107590 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0407 13:30:49.394772 1107590 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0407 13:30:49.402041 1107590 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0407 13:30:49.409140 1107590 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0407 13:30:49.417225 1107590 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0407 13:30:49.426547 1107590 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
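Each openssl x509 -checkend 86400 run above asks whether a certificate will expire within the next 24 hours, which is how minikube decides whether the existing control-plane certs can be reused. A Go equivalent using crypto/x509; the path in main is just one of the certs checked above:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// checkend reports whether the PEM certificate at path expires within d,
// the Go analogue of `openssl x509 -checkend` used in the log above.
func checkend(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	expiring, err := checkend("/var/lib/minikube/certs/front-proxy-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("expires within 24h:", expiring)
}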
	I0407 13:30:49.434287 1107590 kubeadm.go:392] StartCluster: {Name:embed-certs-688390 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:embed-certs-688390 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 13:30:49.434430 1107590 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0407 13:30:49.434498 1107590 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0407 13:30:49.492549 1107590 cri.go:89] found id: "8667721e19675d05e2d11ed1e8ec92d4fb1005b2c5d6fb55a214d0dcf5a81a5e"
	I0407 13:30:49.492575 1107590 cri.go:89] found id: "76558dbc4781d47881c50d48dd6bb28d54a860e776f29e297c8a31f9fe9cb90a"
	I0407 13:30:49.492580 1107590 cri.go:89] found id: "7c7b4a0911eba6af046b71620c7c34ee6045e6dac2e779f492e07ee08922bac7"
	I0407 13:30:49.492584 1107590 cri.go:89] found id: "6e5e7a1630068454fed3bd5b4b9ccd1c7c9d04dad311e54b96273ed75f41ece6"
	I0407 13:30:49.492588 1107590 cri.go:89] found id: "dcef37010c5408321cc328b9c8c7066cd42c0f6012ecebef69dff33c682efaeb"
	I0407 13:30:49.492594 1107590 cri.go:89] found id: "ae9061ad7363ded84522777bef558bcc6facfc004b1953a0a52a987f2585ca5c"
	I0407 13:30:49.492598 1107590 cri.go:89] found id: "503be695ecd9c154b0b6ea612b87be1113c54984dea50c2eb301b5baffe211b7"
	I0407 13:30:49.492602 1107590 cri.go:89] found id: "8a59a835709e5467410288c94735b586d8c71c63d83f5912eef5b0f36f403634"
	I0407 13:30:49.492605 1107590 cri.go:89] found id: ""
	I0407 13:30:49.492661 1107590 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	W0407 13:30:49.512222 1107590 kubeadm.go:399] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-04-07T13:30:49Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	I0407 13:30:49.512385 1107590 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0407 13:30:49.531140 1107590 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0407 13:30:49.531212 1107590 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0407 13:30:49.531315 1107590 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0407 13:30:49.542867 1107590 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0407 13:30:49.543628 1107590 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-688390" does not appear in /home/jenkins/minikube-integration/20602-873072/kubeconfig
	I0407 13:30:49.544016 1107590 kubeconfig.go:62] /home/jenkins/minikube-integration/20602-873072/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-688390" cluster setting kubeconfig missing "embed-certs-688390" context setting]
	I0407 13:30:49.544606 1107590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20602-873072/kubeconfig: {Name:mk9de2da01a51fd73232a20700f86bdc259a91ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:30:49.546555 1107590 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0407 13:30:49.573879 1107590 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.85.2
	I0407 13:30:49.573980 1107590 kubeadm.go:597] duration metric: took 42.747109ms to restartPrimaryControlPlane
	I0407 13:30:49.574007 1107590 kubeadm.go:394] duration metric: took 139.728981ms to StartCluster
	I0407 13:30:49.574036 1107590 settings.go:142] acquiring lock: {Name:mk3e960f3698515246acbd5cb37ff276e0a43a72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:30:49.574142 1107590 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20602-873072/kubeconfig
	I0407 13:30:49.584958 1107590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20602-873072/kubeconfig: {Name:mk9de2da01a51fd73232a20700f86bdc259a91ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:30:49.585551 1107590 config.go:182] Loaded profile config "embed-certs-688390": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0407 13:30:49.585332 1107590 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0407 13:30:49.585688 1107590 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0407 13:30:49.587136 1107590 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-688390"
	I0407 13:30:49.587180 1107590 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-688390"
	W0407 13:30:49.587217 1107590 addons.go:247] addon storage-provisioner should already be in state true
	I0407 13:30:49.587267 1107590 host.go:66] Checking if "embed-certs-688390" exists ...
	I0407 13:30:49.587835 1107590 cli_runner.go:164] Run: docker container inspect embed-certs-688390 --format={{.State.Status}}
	I0407 13:30:49.588054 1107590 addons.go:69] Setting default-storageclass=true in profile "embed-certs-688390"
	I0407 13:30:49.588095 1107590 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-688390"
	I0407 13:30:49.588289 1107590 addons.go:69] Setting metrics-server=true in profile "embed-certs-688390"
	I0407 13:30:49.588302 1107590 addons.go:238] Setting addon metrics-server=true in "embed-certs-688390"
	W0407 13:30:49.588309 1107590 addons.go:247] addon metrics-server should already be in state true
	I0407 13:30:49.588328 1107590 host.go:66] Checking if "embed-certs-688390" exists ...
	I0407 13:30:49.588720 1107590 cli_runner.go:164] Run: docker container inspect embed-certs-688390 --format={{.State.Status}}
	I0407 13:30:49.589636 1107590 cli_runner.go:164] Run: docker container inspect embed-certs-688390 --format={{.State.Status}}
	I0407 13:30:49.595502 1107590 out.go:177] * Verifying Kubernetes components...
	I0407 13:30:49.589879 1107590 addons.go:69] Setting dashboard=true in profile "embed-certs-688390"
	I0407 13:30:49.595937 1107590 addons.go:238] Setting addon dashboard=true in "embed-certs-688390"
	W0407 13:30:49.595949 1107590 addons.go:247] addon dashboard should already be in state true
	I0407 13:30:49.595990 1107590 host.go:66] Checking if "embed-certs-688390" exists ...
	I0407 13:30:49.596435 1107590 cli_runner.go:164] Run: docker container inspect embed-certs-688390 --format={{.State.Status}}
	I0407 13:30:49.599432 1107590 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 13:30:49.669943 1107590 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0407 13:30:49.670067 1107590 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0407 13:30:49.672814 1107590 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0407 13:30:49.672854 1107590 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0407 13:30:49.672929 1107590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-688390
	I0407 13:30:49.673234 1107590 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0407 13:30:49.673245 1107590 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0407 13:30:49.673286 1107590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-688390
	I0407 13:30:49.702543 1107590 addons.go:238] Setting addon default-storageclass=true in "embed-certs-688390"
	W0407 13:30:49.702569 1107590 addons.go:247] addon default-storageclass should already be in state true
	I0407 13:30:49.702594 1107590 host.go:66] Checking if "embed-certs-688390" exists ...
	I0407 13:30:49.703039 1107590 cli_runner.go:164] Run: docker container inspect embed-certs-688390 --format={{.State.Status}}
	I0407 13:30:49.705457 1107590 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0407 13:30:49.716630 1107590 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0407 13:30:46.842901 1095137 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:30:46.870854 1095137 api_server.go:72] duration metric: took 5m51.753710743s to wait for apiserver process to appear ...
	I0407 13:30:46.870880 1095137 api_server.go:88] waiting for apiserver healthz status ...
	I0407 13:30:46.870915 1095137 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0407 13:30:46.870969 1095137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 13:30:46.986233 1095137 cri.go:89] found id: "a1ef4f8376e9fc950587e9d4daa95427d19c48e4606813b7ce87102e58fd46af"
	I0407 13:30:46.986252 1095137 cri.go:89] found id: "d89f223f22b8606c12ec0e778fa21d2abe90349e214332d4a1fa32b45f514d9b"
	I0407 13:30:46.986257 1095137 cri.go:89] found id: ""
	I0407 13:30:46.986264 1095137 logs.go:282] 2 containers: [a1ef4f8376e9fc950587e9d4daa95427d19c48e4606813b7ce87102e58fd46af d89f223f22b8606c12ec0e778fa21d2abe90349e214332d4a1fa32b45f514d9b]
	I0407 13:30:46.986340 1095137 ssh_runner.go:195] Run: which crictl
	I0407 13:30:46.990308 1095137 ssh_runner.go:195] Run: which crictl
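The log-collection loop in this second process (PID 1095137, the original old-k8s-version start) shells out to crictl ps -a --quiet --name=<component> for each control-plane component and records the container IDs it finds. A small Go sketch of the same query; containerIDs is an illustrative helper, not minikube's:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists the IDs of all containers (any state) whose name
// matches, the same crictl invocation the loop above runs per component.
func containerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	ids, err := containerIDs("kube-apiserver")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("%d containers: %v\n", len(ids), ids)
}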
	I0407 13:30:46.993838 1095137 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0407 13:30:46.993911 1095137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 13:30:47.052280 1095137 cri.go:89] found id: "5864d99cdd47d055aee6b1526bb1c6a1ee5862850613e2d63e0d93e141c7f4d5"
	I0407 13:30:47.052300 1095137 cri.go:89] found id: "ad3658b16b26420a5c8d95f062de8e38e1c9d8d5235a77bf2a50e7228aabb735"
	I0407 13:30:47.052305 1095137 cri.go:89] found id: ""
	I0407 13:30:47.052313 1095137 logs.go:282] 2 containers: [5864d99cdd47d055aee6b1526bb1c6a1ee5862850613e2d63e0d93e141c7f4d5 ad3658b16b26420a5c8d95f062de8e38e1c9d8d5235a77bf2a50e7228aabb735]
	I0407 13:30:47.052369 1095137 ssh_runner.go:195] Run: which crictl
	I0407 13:30:47.056223 1095137 ssh_runner.go:195] Run: which crictl
	I0407 13:30:47.059720 1095137 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0407 13:30:47.059794 1095137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 13:30:47.130170 1095137 cri.go:89] found id: "051642364b305742215427b11bf59d630eb773d7d2d8489872ccb8bafb97a33a"
	I0407 13:30:47.130191 1095137 cri.go:89] found id: "e8606d211bc70838dc98631de3d337f0689089d54c4dd0963c66fed159c72bce"
	I0407 13:30:47.130196 1095137 cri.go:89] found id: ""
	I0407 13:30:47.130204 1095137 logs.go:282] 2 containers: [051642364b305742215427b11bf59d630eb773d7d2d8489872ccb8bafb97a33a e8606d211bc70838dc98631de3d337f0689089d54c4dd0963c66fed159c72bce]
	I0407 13:30:47.130261 1095137 ssh_runner.go:195] Run: which crictl
	I0407 13:30:47.134245 1095137 ssh_runner.go:195] Run: which crictl
	I0407 13:30:47.143189 1095137 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0407 13:30:47.143271 1095137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 13:30:47.202603 1095137 cri.go:89] found id: "d6760ad08162f8ec381c7feef6ca753faf89ef2bb6b6aaf4ad3420559e5f73e7"
	I0407 13:30:47.202625 1095137 cri.go:89] found id: "c6f6d481b0f4c36b1a380741fe1a1dc69435fb844001d3a69feaa1fddcdd4030"
	I0407 13:30:47.202630 1095137 cri.go:89] found id: ""
	I0407 13:30:47.202637 1095137 logs.go:282] 2 containers: [d6760ad08162f8ec381c7feef6ca753faf89ef2bb6b6aaf4ad3420559e5f73e7 c6f6d481b0f4c36b1a380741fe1a1dc69435fb844001d3a69feaa1fddcdd4030]
	I0407 13:30:47.202699 1095137 ssh_runner.go:195] Run: which crictl
	I0407 13:30:47.206762 1095137 ssh_runner.go:195] Run: which crictl
	I0407 13:30:47.210646 1095137 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0407 13:30:47.210745 1095137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 13:30:47.284058 1095137 cri.go:89] found id: "74af9023b4fde2bd349b21c976719ba85dda1ea03171badab844478b8cbf2088"
	I0407 13:30:47.284131 1095137 cri.go:89] found id: "77f2619b2d1aad99f71633fed74393b3b19f570432847d24bea319ce3f3cc54b"
	I0407 13:30:47.284150 1095137 cri.go:89] found id: ""
	I0407 13:30:47.284173 1095137 logs.go:282] 2 containers: [74af9023b4fde2bd349b21c976719ba85dda1ea03171badab844478b8cbf2088 77f2619b2d1aad99f71633fed74393b3b19f570432847d24bea319ce3f3cc54b]
	I0407 13:30:47.284264 1095137 ssh_runner.go:195] Run: which crictl
	I0407 13:30:47.290441 1095137 ssh_runner.go:195] Run: which crictl
	I0407 13:30:47.294067 1095137 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 13:30:47.294179 1095137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 13:30:47.342560 1095137 cri.go:89] found id: "04c3bb1dfe7c8d458ce5ec56757e932f8a599e0d467221dd4b85ef6a3fefa6df"
	I0407 13:30:47.342628 1095137 cri.go:89] found id: "2b349ae2ec417a9d000e61e8c66afdcccb2d202d992eec33826df20e410f87a2"
	I0407 13:30:47.342646 1095137 cri.go:89] found id: ""
	I0407 13:30:47.342669 1095137 logs.go:282] 2 containers: [04c3bb1dfe7c8d458ce5ec56757e932f8a599e0d467221dd4b85ef6a3fefa6df 2b349ae2ec417a9d000e61e8c66afdcccb2d202d992eec33826df20e410f87a2]
	I0407 13:30:47.342765 1095137 ssh_runner.go:195] Run: which crictl
	I0407 13:30:47.346752 1095137 ssh_runner.go:195] Run: which crictl
	I0407 13:30:47.351671 1095137 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0407 13:30:47.351794 1095137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 13:30:47.412231 1095137 cri.go:89] found id: "e4568189822d430c58ad9f0a391ab15cbb27395e1408248c2c03f19d5dc9150b"
	I0407 13:30:47.412307 1095137 cri.go:89] found id: "b2ada56c528ba0083dc91efb8ea35b5b75cc614fa96a6cd93819b45a72ee05fb"
	I0407 13:30:47.412328 1095137 cri.go:89] found id: ""
	I0407 13:30:47.412350 1095137 logs.go:282] 2 containers: [e4568189822d430c58ad9f0a391ab15cbb27395e1408248c2c03f19d5dc9150b b2ada56c528ba0083dc91efb8ea35b5b75cc614fa96a6cd93819b45a72ee05fb]
	I0407 13:30:47.412437 1095137 ssh_runner.go:195] Run: which crictl
	I0407 13:30:47.416534 1095137 ssh_runner.go:195] Run: which crictl
	I0407 13:30:47.420684 1095137 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 13:30:47.420804 1095137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 13:30:47.473376 1095137 cri.go:89] found id: "3e668dbcb4dd3fef6e58de6d11f5522dbfad76947bfe7bab5fda3d0228b2f625"
	I0407 13:30:47.473454 1095137 cri.go:89] found id: ""
	I0407 13:30:47.473475 1095137 logs.go:282] 1 containers: [3e668dbcb4dd3fef6e58de6d11f5522dbfad76947bfe7bab5fda3d0228b2f625]
	I0407 13:30:47.473560 1095137 ssh_runner.go:195] Run: which crictl
	I0407 13:30:47.477965 1095137 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0407 13:30:47.478087 1095137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0407 13:30:47.526054 1095137 cri.go:89] found id: "2c994f7c46244b75a5d9fce69b3c3f15c9a2ce0a534a675952a6506d4c18ba61"
	I0407 13:30:47.526129 1095137 cri.go:89] found id: "d05c978cfa5a3d8c7465b05f0087ddff9b59b6291d3e17156779dbb770748849"
	I0407 13:30:47.526148 1095137 cri.go:89] found id: ""
	I0407 13:30:47.526170 1095137 logs.go:282] 2 containers: [2c994f7c46244b75a5d9fce69b3c3f15c9a2ce0a534a675952a6506d4c18ba61 d05c978cfa5a3d8c7465b05f0087ddff9b59b6291d3e17156779dbb770748849]
	I0407 13:30:47.526254 1095137 ssh_runner.go:195] Run: which crictl
	I0407 13:30:47.531086 1095137 ssh_runner.go:195] Run: which crictl
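
Note: the block above is the discovery pass. For each control-plane component, one "sudo crictl ps -a --quiet --name=<component>" call returns the IDs of current and exited containers (two per component here, since the cluster was restarted). A standalone Go sketch of that pattern, with the crictl invocation copied from the log and the surrounding helper assumed:

    // Sketch: discover container IDs for a component via crictl, as above.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listContainerIDs returns the IDs of all containers, running or
    // exited, whose name matches the component; crictl prints one ID
    // per line when --quiet is given.
    func listContainerIDs(component string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        ids, err := listContainerIDs("kube-apiserver")
        if err != nil {
            fmt.Println("crictl failed:", err)
            return
        }
        fmt.Printf("%d containers: %v\n", len(ids), ids)
    }
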
	I0407 13:30:47.534990 1095137 logs.go:123] Gathering logs for describe nodes ...
	I0407 13:30:47.535062 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0407 13:30:47.736664 1095137 logs.go:123] Gathering logs for coredns [e8606d211bc70838dc98631de3d337f0689089d54c4dd0963c66fed159c72bce] ...
	I0407 13:30:47.736704 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e8606d211bc70838dc98631de3d337f0689089d54c4dd0963c66fed159c72bce"
	I0407 13:30:47.789228 1095137 logs.go:123] Gathering logs for kubernetes-dashboard [3e668dbcb4dd3fef6e58de6d11f5522dbfad76947bfe7bab5fda3d0228b2f625] ...
	I0407 13:30:47.789262 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e668dbcb4dd3fef6e58de6d11f5522dbfad76947bfe7bab5fda3d0228b2f625"
	I0407 13:30:47.866453 1095137 logs.go:123] Gathering logs for storage-provisioner [d05c978cfa5a3d8c7465b05f0087ddff9b59b6291d3e17156779dbb770748849] ...
	I0407 13:30:47.866486 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d05c978cfa5a3d8c7465b05f0087ddff9b59b6291d3e17156779dbb770748849"
	I0407 13:30:47.912587 1095137 logs.go:123] Gathering logs for container status ...
	I0407 13:30:47.912618 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 13:30:47.991125 1095137 logs.go:123] Gathering logs for kubelet ...
	I0407 13:30:47.991154 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0407 13:30:48.065207 1095137 logs.go:138] Found kubelet problem: Apr 07 13:25:14 old-k8s-version-856421 kubelet[667]: E0407 13:25:14.307390     667 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-856421" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-856421' and this object
	W0407 13:30:48.065569 1095137 logs.go:138] Found kubelet problem: Apr 07 13:25:14 old-k8s-version-856421 kubelet[667]: E0407 13:25:14.307927     667 reflector.go:138] object-"kube-system"/"kube-proxy-token-j6crq": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-j6crq" is forbidden: User "system:node:old-k8s-version-856421" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-856421' and this object
	W0407 13:30:48.065836 1095137 logs.go:138] Found kubelet problem: Apr 07 13:25:14 old-k8s-version-856421 kubelet[667]: E0407 13:25:14.308116     667 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-856421" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-856421' and this object
	W0407 13:30:48.066068 1095137 logs.go:138] Found kubelet problem: Apr 07 13:25:14 old-k8s-version-856421 kubelet[667]: E0407 13:25:14.308292     667 reflector.go:138] object-"kube-system"/"storage-provisioner-token-nvxlj": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-nvxlj" is forbidden: User "system:node:old-k8s-version-856421" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-856421' and this object
	W0407 13:30:48.066279 1095137 logs.go:138] Found kubelet problem: Apr 07 13:25:14 old-k8s-version-856421 kubelet[667]: E0407 13:25:14.308445     667 reflector.go:138] object-"default"/"default-token-znh7g": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-znh7g" is forbidden: User "system:node:old-k8s-version-856421" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-856421' and this object
	W0407 13:30:48.066499 1095137 logs.go:138] Found kubelet problem: Apr 07 13:25:14 old-k8s-version-856421 kubelet[667]: E0407 13:25:14.308590     667 reflector.go:138] object-"kube-system"/"kindnet-token-fxnc5": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-fxnc5" is forbidden: User "system:node:old-k8s-version-856421" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-856421' and this object
	W0407 13:30:48.066775 1095137 logs.go:138] Found kubelet problem: Apr 07 13:25:14 old-k8s-version-856421 kubelet[667]: E0407 13:25:14.308753     667 reflector.go:138] object-"kube-system"/"coredns-token-sjxkg": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-sjxkg" is forbidden: User "system:node:old-k8s-version-856421" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-856421' and this object
	W0407 13:30:48.072894 1095137 logs.go:138] Found kubelet problem: Apr 07 13:25:15 old-k8s-version-856421 kubelet[667]: E0407 13:25:15.094522     667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0407 13:30:48.078579 1095137 logs.go:138] Found kubelet problem: Apr 07 13:25:16 old-k8s-version-856421 kubelet[667]: E0407 13:25:16.056738     667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:30:48.082673 1095137 logs.go:138] Found kubelet problem: Apr 07 13:25:27 old-k8s-version-856421 kubelet[667]: E0407 13:25:27.890508     667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0407 13:30:48.084370 1095137 logs.go:138] Found kubelet problem: Apr 07 13:25:40 old-k8s-version-856421 kubelet[667]: E0407 13:25:40.901804     667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:30:48.084966 1095137 logs.go:138] Found kubelet problem: Apr 07 13:25:42 old-k8s-version-856421 kubelet[667]: E0407 13:25:42.190528     667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
	W0407 13:30:48.085629 1095137 logs.go:138] Found kubelet problem: Apr 07 13:25:43 old-k8s-version-856421 kubelet[667]: E0407 13:25:43.194128     667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
	W0407 13:30:48.086140 1095137 logs.go:138] Found kubelet problem: Apr 07 13:25:47 old-k8s-version-856421 kubelet[667]: E0407 13:25:47.208836     667 pod_workers.go:191] Error syncing pod ffa09209-8141-4692-8b43-e212485a4adb ("storage-provisioner_kube-system(ffa09209-8141-4692-8b43-e212485a4adb)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(ffa09209-8141-4692-8b43-e212485a4adb)"
	W0407 13:30:48.086480 1095137 logs.go:138] Found kubelet problem: Apr 07 13:25:49 old-k8s-version-856421 kubelet[667]: E0407 13:25:49.601173     667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
	W0407 13:30:48.089328 1095137 logs.go:138] Found kubelet problem: Apr 07 13:25:55 old-k8s-version-856421 kubelet[667]: E0407 13:25:55.894550     667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0407 13:30:48.090071 1095137 logs.go:138] Found kubelet problem: Apr 07 13:26:02 old-k8s-version-856421 kubelet[667]: E0407 13:26:02.259589     667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
	W0407 13:30:48.090263 1095137 logs.go:138] Found kubelet problem: Apr 07 13:26:07 old-k8s-version-856421 kubelet[667]: E0407 13:26:07.882067     667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:30:48.090596 1095137 logs.go:138] Found kubelet problem: Apr 07 13:26:09 old-k8s-version-856421 kubelet[667]: E0407 13:26:09.601119     667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
	W0407 13:30:48.090781 1095137 logs.go:138] Found kubelet problem: Apr 07 13:26:21 old-k8s-version-856421 kubelet[667]: E0407 13:26:21.882035     667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:30:48.091367 1095137 logs.go:138] Found kubelet problem: Apr 07 13:26:23 old-k8s-version-856421 kubelet[667]: E0407 13:26:23.333979     667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
	W0407 13:30:48.091693 1095137 logs.go:138] Found kubelet problem: Apr 07 13:26:29 old-k8s-version-856421 kubelet[667]: E0407 13:26:29.601138     667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
	W0407 13:30:48.091878 1095137 logs.go:138] Found kubelet problem: Apr 07 13:26:33 old-k8s-version-856421 kubelet[667]: E0407 13:26:33.882060     667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:30:48.092204 1095137 logs.go:138] Found kubelet problem: Apr 07 13:26:42 old-k8s-version-856421 kubelet[667]: E0407 13:26:42.882285     667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
	W0407 13:30:48.094653 1095137 logs.go:138] Found kubelet problem: Apr 07 13:26:46 old-k8s-version-856421 kubelet[667]: E0407 13:26:46.916880     667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0407 13:30:48.094984 1095137 logs.go:138] Found kubelet problem: Apr 07 13:26:53 old-k8s-version-856421 kubelet[667]: E0407 13:26:53.881641     667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
	W0407 13:30:48.095172 1095137 logs.go:138] Found kubelet problem: Apr 07 13:26:58 old-k8s-version-856421 kubelet[667]: E0407 13:26:58.887165     667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:30:48.095764 1095137 logs.go:138] Found kubelet problem: Apr 07 13:27:05 old-k8s-version-856421 kubelet[667]: E0407 13:27:05.451459     667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
	W0407 13:30:48.096091 1095137 logs.go:138] Found kubelet problem: Apr 07 13:27:09 old-k8s-version-856421 kubelet[667]: E0407 13:27:09.601083     667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
	W0407 13:30:48.096275 1095137 logs.go:138] Found kubelet problem: Apr 07 13:27:11 old-k8s-version-856421 kubelet[667]: E0407 13:27:11.882020     667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:30:48.096603 1095137 logs.go:138] Found kubelet problem: Apr 07 13:27:22 old-k8s-version-856421 kubelet[667]: E0407 13:27:22.882870     667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
	W0407 13:30:48.096788 1095137 logs.go:138] Found kubelet problem: Apr 07 13:27:24 old-k8s-version-856421 kubelet[667]: E0407 13:27:24.883611     667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:30:48.097167 1095137 logs.go:138] Found kubelet problem: Apr 07 13:27:33 old-k8s-version-856421 kubelet[667]: E0407 13:27:33.881645     667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
	W0407 13:30:48.097363 1095137 logs.go:138] Found kubelet problem: Apr 07 13:27:36 old-k8s-version-856421 kubelet[667]: E0407 13:27:36.883495     667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:30:48.097692 1095137 logs.go:138] Found kubelet problem: Apr 07 13:27:46 old-k8s-version-856421 kubelet[667]: E0407 13:27:46.882237     667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
	W0407 13:30:48.097891 1095137 logs.go:138] Found kubelet problem: Apr 07 13:27:49 old-k8s-version-856421 kubelet[667]: E0407 13:27:49.882049     667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:30:48.098077 1095137 logs.go:138] Found kubelet problem: Apr 07 13:28:00 old-k8s-version-856421 kubelet[667]: E0407 13:28:00.882106     667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:30:48.098408 1095137 logs.go:138] Found kubelet problem: Apr 07 13:28:01 old-k8s-version-856421 kubelet[667]: E0407 13:28:01.881859     667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
	W0407 13:30:48.098735 1095137 logs.go:138] Found kubelet problem: Apr 07 13:28:13 old-k8s-version-856421 kubelet[667]: E0407 13:28:13.882356     667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
	W0407 13:30:48.101174 1095137 logs.go:138] Found kubelet problem: Apr 07 13:28:15 old-k8s-version-856421 kubelet[667]: E0407 13:28:15.895177     667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0407 13:30:48.101500 1095137 logs.go:138] Found kubelet problem: Apr 07 13:28:24 old-k8s-version-856421 kubelet[667]: E0407 13:28:24.882233     667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
	W0407 13:30:48.101683 1095137 logs.go:138] Found kubelet problem: Apr 07 13:28:27 old-k8s-version-856421 kubelet[667]: E0407 13:28:27.882283     667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:30:48.102314 1095137 logs.go:138] Found kubelet problem: Apr 07 13:28:36 old-k8s-version-856421 kubelet[667]: E0407 13:28:36.680281     667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
	W0407 13:30:48.102503 1095137 logs.go:138] Found kubelet problem: Apr 07 13:28:38 old-k8s-version-856421 kubelet[667]: E0407 13:28:38.882208     667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:30:48.102831 1095137 logs.go:138] Found kubelet problem: Apr 07 13:28:39 old-k8s-version-856421 kubelet[667]: E0407 13:28:39.601171     667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
	W0407 13:30:48.103015 1095137 logs.go:138] Found kubelet problem: Apr 07 13:28:50 old-k8s-version-856421 kubelet[667]: E0407 13:28:50.882465     667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:30:48.103343 1095137 logs.go:138] Found kubelet problem: Apr 07 13:28:52 old-k8s-version-856421 kubelet[667]: E0407 13:28:52.882220     667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
	W0407 13:30:48.103529 1095137 logs.go:138] Found kubelet problem: Apr 07 13:29:01 old-k8s-version-856421 kubelet[667]: E0407 13:29:01.882101     667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:30:48.103856 1095137 logs.go:138] Found kubelet problem: Apr 07 13:29:04 old-k8s-version-856421 kubelet[667]: E0407 13:29:04.885771     667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
	W0407 13:30:48.104040 1095137 logs.go:138] Found kubelet problem: Apr 07 13:29:15 old-k8s-version-856421 kubelet[667]: E0407 13:29:15.882105     667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:30:48.104366 1095137 logs.go:138] Found kubelet problem: Apr 07 13:29:19 old-k8s-version-856421 kubelet[667]: E0407 13:29:19.881643     667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
	W0407 13:30:48.104552 1095137 logs.go:138] Found kubelet problem: Apr 07 13:29:28 old-k8s-version-856421 kubelet[667]: E0407 13:29:28.884253     667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:30:48.105009 1095137 logs.go:138] Found kubelet problem: Apr 07 13:29:32 old-k8s-version-856421 kubelet[667]: E0407 13:29:32.882068     667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
	W0407 13:30:48.105201 1095137 logs.go:138] Found kubelet problem: Apr 07 13:29:39 old-k8s-version-856421 kubelet[667]: E0407 13:29:39.883031     667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:30:48.105541 1095137 logs.go:138] Found kubelet problem: Apr 07 13:29:47 old-k8s-version-856421 kubelet[667]: E0407 13:29:47.882527     667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
	W0407 13:30:48.105739 1095137 logs.go:138] Found kubelet problem: Apr 07 13:29:50 old-k8s-version-856421 kubelet[667]: E0407 13:29:50.882583     667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:30:48.106067 1095137 logs.go:138] Found kubelet problem: Apr 07 13:29:59 old-k8s-version-856421 kubelet[667]: E0407 13:29:59.882436     667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
	W0407 13:30:48.106251 1095137 logs.go:138] Found kubelet problem: Apr 07 13:30:01 old-k8s-version-856421 kubelet[667]: E0407 13:30:01.885267     667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:30:48.106586 1095137 logs.go:138] Found kubelet problem: Apr 07 13:30:10 old-k8s-version-856421 kubelet[667]: E0407 13:30:10.882871     667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
	W0407 13:30:48.106770 1095137 logs.go:138] Found kubelet problem: Apr 07 13:30:15 old-k8s-version-856421 kubelet[667]: E0407 13:30:15.882153     667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:30:48.107101 1095137 logs.go:138] Found kubelet problem: Apr 07 13:30:22 old-k8s-version-856421 kubelet[667]: E0407 13:30:22.882311     667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
	W0407 13:30:48.107285 1095137 logs.go:138] Found kubelet problem: Apr 07 13:30:28 old-k8s-version-856421 kubelet[667]: E0407 13:30:28.882142     667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:30:48.107610 1095137 logs.go:138] Found kubelet problem: Apr 07 13:30:35 old-k8s-version-856421 kubelet[667]: E0407 13:30:35.885355     667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
	W0407 13:30:48.107794 1095137 logs.go:138] Found kubelet problem: Apr 07 13:30:41 old-k8s-version-856421 kubelet[667]: E0407 13:30:41.882269     667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0407 13:30:48.107807 1095137 logs.go:123] Gathering logs for kube-proxy [77f2619b2d1aad99f71633fed74393b3b19f570432847d24bea319ce3f3cc54b] ...
	I0407 13:30:48.107822 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 77f2619b2d1aad99f71633fed74393b3b19f570432847d24bea319ce3f3cc54b"
	I0407 13:30:48.156575 1095137 logs.go:123] Gathering logs for kindnet [e4568189822d430c58ad9f0a391ab15cbb27395e1408248c2c03f19d5dc9150b] ...
	I0407 13:30:48.156606 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e4568189822d430c58ad9f0a391ab15cbb27395e1408248c2c03f19d5dc9150b"
	I0407 13:30:48.232444 1095137 logs.go:123] Gathering logs for kindnet [b2ada56c528ba0083dc91efb8ea35b5b75cc614fa96a6cd93819b45a72ee05fb] ...
	I0407 13:30:48.232472 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b2ada56c528ba0083dc91efb8ea35b5b75cc614fa96a6cd93819b45a72ee05fb"
	I0407 13:30:48.305914 1095137 logs.go:123] Gathering logs for kube-apiserver [a1ef4f8376e9fc950587e9d4daa95427d19c48e4606813b7ce87102e58fd46af] ...
	I0407 13:30:48.305993 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a1ef4f8376e9fc950587e9d4daa95427d19c48e4606813b7ce87102e58fd46af"
	I0407 13:30:48.379011 1095137 logs.go:123] Gathering logs for kube-apiserver [d89f223f22b8606c12ec0e778fa21d2abe90349e214332d4a1fa32b45f514d9b] ...
	I0407 13:30:48.379086 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d89f223f22b8606c12ec0e778fa21d2abe90349e214332d4a1fa32b45f514d9b"
	I0407 13:30:48.462552 1095137 logs.go:123] Gathering logs for etcd [5864d99cdd47d055aee6b1526bb1c6a1ee5862850613e2d63e0d93e141c7f4d5] ...
	I0407 13:30:48.462584 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5864d99cdd47d055aee6b1526bb1c6a1ee5862850613e2d63e0d93e141c7f4d5"
	I0407 13:30:48.528785 1095137 logs.go:123] Gathering logs for kube-scheduler [c6f6d481b0f4c36b1a380741fe1a1dc69435fb844001d3a69feaa1fddcdd4030] ...
	I0407 13:30:48.528974 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c6f6d481b0f4c36b1a380741fe1a1dc69435fb844001d3a69feaa1fddcdd4030"
	I0407 13:30:48.589264 1095137 logs.go:123] Gathering logs for kube-controller-manager [04c3bb1dfe7c8d458ce5ec56757e932f8a599e0d467221dd4b85ef6a3fefa6df] ...
	I0407 13:30:48.589336 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 04c3bb1dfe7c8d458ce5ec56757e932f8a599e0d467221dd4b85ef6a3fefa6df"
	I0407 13:30:48.680565 1095137 logs.go:123] Gathering logs for kube-controller-manager [2b349ae2ec417a9d000e61e8c66afdcccb2d202d992eec33826df20e410f87a2] ...
	I0407 13:30:48.680604 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b349ae2ec417a9d000e61e8c66afdcccb2d202d992eec33826df20e410f87a2"
	I0407 13:30:48.779599 1095137 logs.go:123] Gathering logs for storage-provisioner [2c994f7c46244b75a5d9fce69b3c3f15c9a2ce0a534a675952a6506d4c18ba61] ...
	I0407 13:30:48.779675 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2c994f7c46244b75a5d9fce69b3c3f15c9a2ce0a534a675952a6506d4c18ba61"
	I0407 13:30:48.835392 1095137 logs.go:123] Gathering logs for containerd ...
	I0407 13:30:48.835418 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0407 13:30:48.929349 1095137 logs.go:123] Gathering logs for dmesg ...
	I0407 13:30:48.929382 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 13:30:48.954396 1095137 logs.go:123] Gathering logs for etcd [ad3658b16b26420a5c8d95f062de8e38e1c9d8d5235a77bf2a50e7228aabb735] ...
	I0407 13:30:48.954423 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ad3658b16b26420a5c8d95f062de8e38e1c9d8d5235a77bf2a50e7228aabb735"
	I0407 13:30:49.030928 1095137 logs.go:123] Gathering logs for coredns [051642364b305742215427b11bf59d630eb773d7d2d8489872ccb8bafb97a33a] ...
	I0407 13:30:49.031024 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 051642364b305742215427b11bf59d630eb773d7d2d8489872ccb8bafb97a33a"
	I0407 13:30:49.110624 1095137 logs.go:123] Gathering logs for kube-scheduler [d6760ad08162f8ec381c7feef6ca753faf89ef2bb6b6aaf4ad3420559e5f73e7] ...
	I0407 13:30:49.110700 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d6760ad08162f8ec381c7feef6ca753faf89ef2bb6b6aaf4ad3420559e5f73e7"
	I0407 13:30:49.161794 1095137 logs.go:123] Gathering logs for kube-proxy [74af9023b4fde2bd349b21c976719ba85dda1ea03171badab844478b8cbf2088] ...
	I0407 13:30:49.161888 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 74af9023b4fde2bd349b21c976719ba85dda1ea03171badab844478b8cbf2088"
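
Note: each "Gathering logs for ..." step above maps to one command. Containers are read with "crictl logs --tail 400 <id>", while kubelet and containerd, which run as systemd units rather than containers, are read from the journal. A reduced Go sketch of that loop; the container ID is a placeholder, not one of the IDs above:

    // Sketch: run each gathering command and print its output.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func gather(name string, args ...string) {
        // CombinedOutput keeps stderr, which is where crictl and
        // journalctl report their own failures.
        out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
        if err != nil {
            fmt.Printf("gathering %s failed: %v\n", name, err)
        }
        fmt.Printf("=== %s ===\n%s", name, out)
    }

    func main() {
        // One crictl call per discovered container ID (placeholder shown).
        gather("kube-apiserver", "sudo", "crictl", "logs", "--tail", "400", "<container-id>")
        // System services are read from the journal instead.
        gather("kubelet", "sudo", "journalctl", "-u", "kubelet", "-n", "400")
        gather("containerd", "sudo", "journalctl", "-u", "containerd", "-n", "400")
    }
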
	I0407 13:30:49.226058 1095137 out.go:358] Setting ErrFile to fd 2...
	I0407 13:30:49.226135 1095137 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0407 13:30:49.226216 1095137 out.go:270] X Problems detected in kubelet:
	W0407 13:30:49.226386 1095137 out.go:270]   Apr 07 13:30:15 old-k8s-version-856421 kubelet[667]: E0407 13:30:15.882153     667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:30:49.226426 1095137 out.go:270]   Apr 07 13:30:22 old-k8s-version-856421 kubelet[667]: E0407 13:30:22.882311     667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
	W0407 13:30:49.226481 1095137 out.go:270]   Apr 07 13:30:28 old-k8s-version-856421 kubelet[667]: E0407 13:30:28.882142     667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:30:49.226514 1095137 out.go:270]   Apr 07 13:30:35 old-k8s-version-856421 kubelet[667]: E0407 13:30:35.885355     667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
	W0407 13:30:49.226557 1095137 out.go:270]   Apr 07 13:30:41 old-k8s-version-856421 kubelet[667]: E0407 13:30:41.882269     667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0407 13:30:49.226602 1095137 out.go:358] Setting ErrFile to fd 2...
	I0407 13:30:49.226640 1095137 out.go:392] TERM=,COLORTERM=, which probably does not support color
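
Note: the "Problems detected in kubelet" summary above is built while gathering. Lines matching known failure markers in the kubelet journal are flagged as they stream past ("Found kubelet problem"), and the most recent ones are replayed at the end. A rough sketch of that scan; the marker list is illustrative and only approximates whatever patterns minikube actually matches:

    // Sketch: flag journal lines that contain known failure markers.
    package main

    import (
        "bufio"
        "fmt"
        "strings"
    )

    // Assumed marker set for illustration, not minikube's real one.
    var markers = []string{
        "Error syncing pod", // pod_workers retry loops, as seen above
        "reflector.go",      // watch/list RBAC failures, as seen above
    }

    func scan(journal string) []string {
        var problems []string
        sc := bufio.NewScanner(strings.NewReader(journal))
        for sc.Scan() {
            line := sc.Text()
            for _, m := range markers {
                if strings.Contains(line, m) {
                    problems = append(problems, line)
                    break
                }
            }
        }
        return problems
    }

    func main() {
        journal := "Apr 07 13:30:41 kubelet[667]: E0407 ... Error syncing pod ...\n"
        for _, p := range scan(journal) {
            fmt.Println("Found kubelet problem:", p)
        }
    }
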
	I0407 13:30:49.719980 1107590 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0407 13:30:49.720008 1107590 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0407 13:30:49.720101 1107590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-688390
	I0407 13:30:49.743258 1107590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34190 SSHKeyPath:/home/jenkins/minikube-integration/20602-873072/.minikube/machines/embed-certs-688390/id_rsa Username:docker}
	I0407 13:30:49.748723 1107590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34190 SSHKeyPath:/home/jenkins/minikube-integration/20602-873072/.minikube/machines/embed-certs-688390/id_rsa Username:docker}
	I0407 13:30:49.782080 1107590 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0407 13:30:49.782100 1107590 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0407 13:30:49.782164 1107590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-688390
	I0407 13:30:49.801166 1107590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34190 SSHKeyPath:/home/jenkins/minikube-integration/20602-873072/.minikube/machines/embed-certs-688390/id_rsa Username:docker}
	I0407 13:30:49.814700 1107590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34190 SSHKeyPath:/home/jenkins/minikube-integration/20602-873072/.minikube/machines/embed-certs-688390/id_rsa Username:docker}
	I0407 13:30:49.885594 1107590 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0407 13:30:49.953383 1107590 node_ready.go:35] waiting up to 6m0s for node "embed-certs-688390" to be "Ready" ...
	I0407 13:30:50.017289 1107590 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0407 13:30:50.017374 1107590 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0407 13:30:50.141489 1107590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0407 13:30:50.148337 1107590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0407 13:30:50.152555 1107590 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0407 13:30:50.152628 1107590 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0407 13:30:50.189138 1107590 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0407 13:30:50.189223 1107590 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0407 13:30:50.235425 1107590 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0407 13:30:50.235496 1107590 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0407 13:30:50.323856 1107590 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0407 13:30:50.323929 1107590 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0407 13:30:50.467515 1107590 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0407 13:30:50.467589 1107590 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0407 13:30:50.478779 1107590 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0407 13:30:50.478851 1107590 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0407 13:30:50.615259 1107590 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0407 13:30:50.615331 1107590 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0407 13:30:50.660938 1107590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0407 13:30:50.725537 1107590 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0407 13:30:50.725613 1107590 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0407 13:30:50.834421 1107590 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0407 13:30:50.834499 1107590 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0407 13:30:50.979246 1107590 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0407 13:30:50.979320 1107590 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0407 13:30:51.063504 1107590 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0407 13:30:51.063579 1107590 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0407 13:30:51.106347 1107590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
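
Note: the addon sequence above follows a two-step pattern. Each manifest is first copied into /etc/kubernetes/addons/ over SSH, then a single kubectl apply names every file at once, using the kubeconfig and kubectl binary inside the node. A simplified sketch of the apply step; the manifest list is abbreviated and the copy step is omitted:

    // Sketch: apply all staged addon manifests in one kubectl call.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        manifests := []string{
            "/etc/kubernetes/addons/dashboard-ns.yaml",
            "/etc/kubernetes/addons/dashboard-svc.yaml",
            // ... remaining dashboard manifests ...
        }
        // sudo accepts leading VAR=value assignments, which is how the
        // log sets KUBECONFIG for the in-node kubectl binary.
        args := []string{
            "KUBECONFIG=/var/lib/minikube/kubeconfig",
            "/var/lib/minikube/binaries/v1.32.2/kubectl", "apply",
        }
        for _, m := range manifests {
            args = append(args, "-f", m)
        }
        // One invocation for all files keeps ordering and reduces
        // round-trips, matching the single long command in the log.
        out, err := exec.Command("sudo", args...).CombinedOutput()
        fmt.Printf("%s", out)
        if err != nil {
            fmt.Println("apply failed:", err)
        }
    }
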
	I0407 13:30:54.638260 1107590 node_ready.go:49] node "embed-certs-688390" has status "Ready":"True"
	I0407 13:30:54.638347 1107590 node_ready.go:38] duration metric: took 4.684914251s for node "embed-certs-688390" to be "Ready" ...
	I0407 13:30:54.638376 1107590 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0407 13:30:54.690506 1107590 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-855lf" in "kube-system" namespace to be "Ready" ...
	I0407 13:30:54.722772 1107590 pod_ready.go:93] pod "coredns-668d6bf9bc-855lf" in "kube-system" namespace has status "Ready":"True"
	I0407 13:30:54.722855 1107590 pod_ready.go:82] duration metric: took 32.270521ms for pod "coredns-668d6bf9bc-855lf" in "kube-system" namespace to be "Ready" ...
	I0407 13:30:54.722883 1107590 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-688390" in "kube-system" namespace to be "Ready" ...
	I0407 13:30:54.737677 1107590 pod_ready.go:93] pod "etcd-embed-certs-688390" in "kube-system" namespace has status "Ready":"True"
	I0407 13:30:54.737712 1107590 pod_ready.go:82] duration metric: took 14.8067ms for pod "etcd-embed-certs-688390" in "kube-system" namespace to be "Ready" ...
	I0407 13:30:54.737729 1107590 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-688390" in "kube-system" namespace to be "Ready" ...
	I0407 13:30:54.757757 1107590 pod_ready.go:93] pod "kube-apiserver-embed-certs-688390" in "kube-system" namespace has status "Ready":"True"
	I0407 13:30:54.757782 1107590 pod_ready.go:82] duration metric: took 20.045109ms for pod "kube-apiserver-embed-certs-688390" in "kube-system" namespace to be "Ready" ...
	I0407 13:30:54.757795 1107590 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-688390" in "kube-system" namespace to be "Ready" ...
	I0407 13:30:54.799777 1107590 pod_ready.go:93] pod "kube-controller-manager-embed-certs-688390" in "kube-system" namespace has status "Ready":"True"
	I0407 13:30:54.799852 1107590 pod_ready.go:82] duration metric: took 42.049171ms for pod "kube-controller-manager-embed-certs-688390" in "kube-system" namespace to be "Ready" ...
	I0407 13:30:54.799879 1107590 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-npv7l" in "kube-system" namespace to be "Ready" ...
	I0407 13:30:54.854334 1107590 pod_ready.go:93] pod "kube-proxy-npv7l" in "kube-system" namespace has status "Ready":"True"
	I0407 13:30:54.854417 1107590 pod_ready.go:82] duration metric: took 54.515522ms for pod "kube-proxy-npv7l" in "kube-system" namespace to be "Ready" ...
	I0407 13:30:54.854443 1107590 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-688390" in "kube-system" namespace to be "Ready" ...
	I0407 13:30:55.122265 1107590 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.980739539s)
	I0407 13:30:56.879032 1107590 pod_ready.go:103] pod "kube-scheduler-embed-certs-688390" in "kube-system" namespace has status "Ready":"False"
	I0407 13:30:58.362134 1107590 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.21376444s)
	I0407 13:30:58.556458 1107590 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.895434082s)
	I0407 13:30:58.556501 1107590 addons.go:479] Verifying addon metrics-server=true in "embed-certs-688390"
	I0407 13:30:58.873694 1107590 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (7.767253916s)
	I0407 13:30:58.877112 1107590 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-688390 addons enable metrics-server
	
	I0407 13:30:58.879958 1107590 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0407 13:30:59.228033 1095137 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0407 13:30:59.239760 1095137 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0407 13:30:59.244653 1095137 out.go:201] 
	W0407 13:30:59.247535 1095137 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W0407 13:30:59.247763 1095137 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	W0407 13:30:59.247829 1095137 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	W0407 13:30:59.247882 1095137 out.go:270] * 
	W0407 13:30:59.248818 1095137 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0407 13:30:59.252447 1095137 out.go:201] 
	
	
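Note on the failure above: this stderr interleaves two concurrent minikube runs — PID 1107590 belongs to a separate embed-certs-688390 profile that completes its addon setup, while PID 1095137 is the old-k8s-version-856421 start under test. Its apiserver answers /healthz with 200 ("ok"), yet the run exits with K8S_UNHEALTHY_CONTROL_PLANE because the node never reports the requested v1.20.0 inside the 6m wait. A minimal sketch of that kind of version wait using client-go (an illustrative assumption, not minikube's actual code; the kubeconfig path and node name are taken from the log above):

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path and node name as they appear in the log; adjust as needed.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(6 * time.Minute) // the wait budget in the error message
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "old-k8s-version-856421", metav1.GetOptions{})
		if err == nil && node.Status.NodeInfo.KubeletVersion == "v1.20.0" {
			fmt.Println("control plane updated to v1.20.0")
			return
		}
		time.Sleep(5 * time.Second)
	}
	fmt.Println("timed out: control plane never updated to v1.20.0")
}

If the node were healthy, the loop would return as soon as NodeInfo.KubeletVersion matched; here it would time out exactly as the test did.
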
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	1df7625042191       523cad1a4df73       2 minutes ago       Exited              dashboard-metrics-scraper   5                   481fae8da9e40       dashboard-metrics-scraper-8d5bb5db8-52tjd
	2c994f7c46244       ba04bb24b9575       4 minutes ago       Running             storage-provisioner         2                   d900ee90a62f8       storage-provisioner
	3e668dbcb4dd3       20b332c9a70d8       5 minutes ago       Running             kubernetes-dashboard        0                   60ebbca337412       kubernetes-dashboard-cd95d586-nrhfv
	051642364b305       db91994f4ee8f       5 minutes ago       Running             coredns                     1                   c32a143427a1c       coredns-74ff55c5b-gtrrb
	74af9023b4fde       25a5233254979       5 minutes ago       Running             kube-proxy                  1                   0ba7816b7eb92       kube-proxy-j5fsn
	d05c978cfa5a3       ba04bb24b9575       5 minutes ago       Exited              storage-provisioner         1                   d900ee90a62f8       storage-provisioner
	e4568189822d4       ee75e27fff91c       5 minutes ago       Running             kindnet-cni                 1                   bb5f824017538       kindnet-8q8nx
	417089f9d6f6e       1611cd07b61d5       5 minutes ago       Running             busybox                     1                   108d027757958       busybox
	04c3bb1dfe7c8       1df8a2b116bd1       5 minutes ago       Running             kube-controller-manager     1                   96c8ae41774db       kube-controller-manager-old-k8s-version-856421
	5864d99cdd47d       05b738aa1bc63       5 minutes ago       Running             etcd                        1                   62cc726a5255d       etcd-old-k8s-version-856421
	a1ef4f8376e9f       2c08bbbc02d3a       5 minutes ago       Running             kube-apiserver              1                   ea7a7c3c55c48       kube-apiserver-old-k8s-version-856421
	d6760ad08162f       e7605f88f17d6       5 minutes ago       Running             kube-scheduler              1                   8d95fd0725913       kube-scheduler-old-k8s-version-856421
	e32207a71e4f1       1611cd07b61d5       6 minutes ago       Exited              busybox                     0                   65480f8847710       busybox
	e8606d211bc70       db91994f4ee8f       8 minutes ago       Exited              coredns                     0                   42405f984c851       coredns-74ff55c5b-gtrrb
	b2ada56c528ba       ee75e27fff91c       8 minutes ago       Exited              kindnet-cni                 0                   bd07d66a8baca       kindnet-8q8nx
	77f2619b2d1aa       25a5233254979       8 minutes ago       Exited              kube-proxy                  0                   e06826c1a5d6d       kube-proxy-j5fsn
	ad3658b16b264       05b738aa1bc63       8 minutes ago       Exited              etcd                        0                   bf93bea3f7b03       etcd-old-k8s-version-856421
	c6f6d481b0f4c       e7605f88f17d6       8 minutes ago       Exited              kube-scheduler              0                   acfaf5cdb4249       kube-scheduler-old-k8s-version-856421
	d89f223f22b86       2c08bbbc02d3a       8 minutes ago       Exited              kube-apiserver              0                   8ed927c15dd98       kube-apiserver-old-k8s-version-856421
	2b349ae2ec417       1df8a2b116bd1       8 minutes ago       Exited              kube-controller-manager     0                   d54344d5cb3a1       kube-controller-manager-old-k8s-version-856421
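
The table shows dashboard-metrics-scraper already on attempt 5 and Exited again — a crash loop, with each restart delayed longer than the last (the containerd entries for attempts 4 and 5 below are roughly 90s apart). A rough sketch of kubelet's default restart backoff — 10s initial delay, doubling per restart, capped at 5m; these are upstream kubelet defaults assumed for illustration, not values read from this cluster:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Assumed kubelet defaults: 10s initial backoff, doubled per restart, 5m cap.
	backoff := 10 * time.Second
	maxBackoff := 5 * time.Minute
	for attempt := 1; attempt <= 8; attempt++ {
		fmt.Printf("restart %d: wait %v before starting the container again\n", attempt, backoff)
		backoff *= 2
		if backoff > maxBackoff {
			backoff = maxBackoff
		}
	}
}

That schedule is why later restart attempts show up minutes apart in the runtime log.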
	
	
	==> containerd <==
	Apr 07 13:26:46 old-k8s-version-856421 containerd[570]: time="2025-04-07T13:26:46.916342700Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Apr 07 13:27:04 old-k8s-version-856421 containerd[570]: time="2025-04-07T13:27:04.884652930Z" level=info msg="CreateContainer within sandbox \"481fae8da9e40b0c6b9d7ed57ce568c92b0a7de5077cc9f5b6527a7b29ea0172\" for container name:\"dashboard-metrics-scraper\" attempt:4"
	Apr 07 13:27:04 old-k8s-version-856421 containerd[570]: time="2025-04-07T13:27:04.906988950Z" level=info msg="CreateContainer within sandbox \"481fae8da9e40b0c6b9d7ed57ce568c92b0a7de5077cc9f5b6527a7b29ea0172\" for name:\"dashboard-metrics-scraper\" attempt:4 returns container id \"71f808f85b309809247cc3e4c9e9c18e41d6b6770cd4b0eb5d57f89f5045add4\""
	Apr 07 13:27:04 old-k8s-version-856421 containerd[570]: time="2025-04-07T13:27:04.912783815Z" level=info msg="StartContainer for \"71f808f85b309809247cc3e4c9e9c18e41d6b6770cd4b0eb5d57f89f5045add4\""
	Apr 07 13:27:05 old-k8s-version-856421 containerd[570]: time="2025-04-07T13:27:05.005563627Z" level=info msg="StartContainer for \"71f808f85b309809247cc3e4c9e9c18e41d6b6770cd4b0eb5d57f89f5045add4\" returns successfully"
	Apr 07 13:27:05 old-k8s-version-856421 containerd[570]: time="2025-04-07T13:27:05.005627595Z" level=info msg="received exit event container_id:\"71f808f85b309809247cc3e4c9e9c18e41d6b6770cd4b0eb5d57f89f5045add4\" id:\"71f808f85b309809247cc3e4c9e9c18e41d6b6770cd4b0eb5d57f89f5045add4\" pid:3047 exit_status:255 exited_at:{seconds:1744032425 nanos:4710043}"
	Apr 07 13:27:05 old-k8s-version-856421 containerd[570]: time="2025-04-07T13:27:05.036080647Z" level=info msg="shim disconnected" id=71f808f85b309809247cc3e4c9e9c18e41d6b6770cd4b0eb5d57f89f5045add4 namespace=k8s.io
	Apr 07 13:27:05 old-k8s-version-856421 containerd[570]: time="2025-04-07T13:27:05.036121812Z" level=warning msg="cleaning up after shim disconnected" id=71f808f85b309809247cc3e4c9e9c18e41d6b6770cd4b0eb5d57f89f5045add4 namespace=k8s.io
	Apr 07 13:27:05 old-k8s-version-856421 containerd[570]: time="2025-04-07T13:27:05.036162797Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Apr 07 13:27:05 old-k8s-version-856421 containerd[570]: time="2025-04-07T13:27:05.453219446Z" level=info msg="RemoveContainer for \"223e8b72be32919eadd5acb820b4dd7b7c1450a6869493c759e2c9e8529c8d75\""
	Apr 07 13:27:05 old-k8s-version-856421 containerd[570]: time="2025-04-07T13:27:05.460589818Z" level=info msg="RemoveContainer for \"223e8b72be32919eadd5acb820b4dd7b7c1450a6869493c759e2c9e8529c8d75\" returns successfully"
	Apr 07 13:28:15 old-k8s-version-856421 containerd[570]: time="2025-04-07T13:28:15.882656938Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Apr 07 13:28:15 old-k8s-version-856421 containerd[570]: time="2025-04-07T13:28:15.892524829Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" host=fake.domain
	Apr 07 13:28:15 old-k8s-version-856421 containerd[570]: time="2025-04-07T13:28:15.894628006Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Apr 07 13:28:15 old-k8s-version-856421 containerd[570]: time="2025-04-07T13:28:15.894648896Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Apr 07 13:28:35 old-k8s-version-856421 containerd[570]: time="2025-04-07T13:28:35.884129413Z" level=info msg="CreateContainer within sandbox \"481fae8da9e40b0c6b9d7ed57ce568c92b0a7de5077cc9f5b6527a7b29ea0172\" for container name:\"dashboard-metrics-scraper\" attempt:5"
	Apr 07 13:28:35 old-k8s-version-856421 containerd[570]: time="2025-04-07T13:28:35.903590328Z" level=info msg="CreateContainer within sandbox \"481fae8da9e40b0c6b9d7ed57ce568c92b0a7de5077cc9f5b6527a7b29ea0172\" for name:\"dashboard-metrics-scraper\" attempt:5 returns container id \"1df7625042191f4e3be7c9854e1ca36378248104241854b01edb494d6c420c34\""
	Apr 07 13:28:35 old-k8s-version-856421 containerd[570]: time="2025-04-07T13:28:35.904500191Z" level=info msg="StartContainer for \"1df7625042191f4e3be7c9854e1ca36378248104241854b01edb494d6c420c34\""
	Apr 07 13:28:35 old-k8s-version-856421 containerd[570]: time="2025-04-07T13:28:35.981178739Z" level=info msg="StartContainer for \"1df7625042191f4e3be7c9854e1ca36378248104241854b01edb494d6c420c34\" returns successfully"
	Apr 07 13:28:35 old-k8s-version-856421 containerd[570]: time="2025-04-07T13:28:35.983860402Z" level=info msg="received exit event container_id:\"1df7625042191f4e3be7c9854e1ca36378248104241854b01edb494d6c420c34\" id:\"1df7625042191f4e3be7c9854e1ca36378248104241854b01edb494d6c420c34\" pid:3304 exit_status:255 exited_at:{seconds:1744032515 nanos:983628891}"
	Apr 07 13:28:36 old-k8s-version-856421 containerd[570]: time="2025-04-07T13:28:36.025492793Z" level=info msg="shim disconnected" id=1df7625042191f4e3be7c9854e1ca36378248104241854b01edb494d6c420c34 namespace=k8s.io
	Apr 07 13:28:36 old-k8s-version-856421 containerd[570]: time="2025-04-07T13:28:36.025536527Z" level=warning msg="cleaning up after shim disconnected" id=1df7625042191f4e3be7c9854e1ca36378248104241854b01edb494d6c420c34 namespace=k8s.io
	Apr 07 13:28:36 old-k8s-version-856421 containerd[570]: time="2025-04-07T13:28:36.025695447Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Apr 07 13:28:36 old-k8s-version-856421 containerd[570]: time="2025-04-07T13:28:36.685305910Z" level=info msg="RemoveContainer for \"71f808f85b309809247cc3e4c9e9c18e41d6b6770cd4b0eb5d57f89f5045add4\""
	Apr 07 13:28:36 old-k8s-version-856421 containerd[570]: time="2025-04-07T13:28:36.698770251Z" level=info msg="RemoveContainer for \"71f808f85b309809247cc3e4c9e9c18e41d6b6770cd4b0eb5d57f89f5045add4\" returns successfully"
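
The repeated PullImage failures above are the test's intent: the echoserver image is hosted at fake.domain, which never resolves, so every pull fails with "no such host" — and, consistent with that, the apiserver log further down keeps reporting v1beta1.metrics.k8s.io as unavailable. The resolver failure itself is trivially reproducible; a minimal standalone sketch (a hypothetical helper, not part of the test suite):

package main

import (
	"fmt"
	"net"
)

func main() {
	// fake.domain is intentionally unresolvable, so the image pull can never start.
	addrs, err := net.LookupHost("fake.domain")
	if err != nil {
		fmt.Println("lookup failed:", err) // matches the "no such host" error in the containerd log
		return
	}
	fmt.Println("resolved:", addrs)
}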
	
	
	==> coredns [051642364b305742215427b11bf59d630eb773d7d2d8489872ccb8bafb97a33a] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:60252 - 3125 "HINFO IN 8244431149089733818.7730825843377024063. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013087223s
	
	
	==> coredns [e8606d211bc70838dc98631de3d337f0689089d54c4dd0963c66fed159c72bce] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:56619 - 25099 "HINFO IN 8395759530412048856.6897807622872564169. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.030681142s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-856421
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-856421
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=33e6edc58d2014d70e908473920ef4ac8eae1e43
	                    minikube.k8s.io/name=old-k8s-version-856421
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_04_07T13_22_15_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Apr 2025 13:22:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-856421
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 07 Apr 2025 13:30:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 07 Apr 2025 13:26:04 +0000   Mon, 07 Apr 2025 13:22:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 07 Apr 2025 13:26:04 +0000   Mon, 07 Apr 2025 13:22:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 07 Apr 2025 13:26:04 +0000   Mon, 07 Apr 2025 13:22:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 07 Apr 2025 13:26:04 +0000   Mon, 07 Apr 2025 13:22:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-856421
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 4795b06879444d91806fbc5506b71cbf
	  System UUID:                e1be3290-981d-4c45-832d-195b60a8715e
	  Boot ID:                    23ff30ac-10fb-424b-be6b-3b05e144d397
	  Kernel Version:             5.15.0-1081-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.27
	  Kubelet Version:            v1.20.0
	  Kube-Proxy Version:         v1.20.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m37s
	  kube-system                 coredns-74ff55c5b-gtrrb                           100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     8m32s
	  kube-system                 etcd-old-k8s-version-856421                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         8m38s
	  kube-system                 kindnet-8q8nx                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      8m32s
	  kube-system                 kube-apiserver-old-k8s-version-856421             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m38s
	  kube-system                 kube-controller-manager-old-k8s-version-856421    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m38s
	  kube-system                 kube-proxy-j5fsn                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m32s
	  kube-system                 kube-scheduler-old-k8s-version-856421             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m38s
	  kube-system                 metrics-server-9975d5f86-tkvrz                    100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         6m27s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m30s
	  kubernetes-dashboard        dashboard-metrics-scraper-8d5bb5db8-52tjd         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m30s
	  kubernetes-dashboard        kubernetes-dashboard-cd95d586-nrhfv               0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m30s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             420Mi (5%)  220Mi (2%)
	  ephemeral-storage  100Mi (0%)  0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 8m58s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m58s (x5 over 8m58s)  kubelet     Node old-k8s-version-856421 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m58s (x5 over 8m58s)  kubelet     Node old-k8s-version-856421 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m58s (x5 over 8m58s)  kubelet     Node old-k8s-version-856421 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m58s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 8m39s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m39s                  kubelet     Node old-k8s-version-856421 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m39s                  kubelet     Node old-k8s-version-856421 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m39s                  kubelet     Node old-k8s-version-856421 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m39s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                8m32s                  kubelet     Node old-k8s-version-856421 status is now: NodeReady
	  Normal  Starting                 8m30s                  kube-proxy  Starting kube-proxy.
	  Normal  Starting                 5m59s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m59s (x8 over 5m59s)  kubelet     Node old-k8s-version-856421 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m59s (x7 over 5m59s)  kubelet     Node old-k8s-version-856421 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m59s (x8 over 5m59s)  kubelet     Node old-k8s-version-856421 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m59s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 5m44s                  kube-proxy  Starting kube-proxy.
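
The Allocated resources block above is simple arithmetic over the pod table: 950m of summed CPU requests against 2 allocatable CPUs yields the 47% shown. A worked sketch with the same resource.Quantity type that kubectl uses (an illustration, not kubectl's code):

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	requests := resource.MustParse("950m") // summed CPU requests from the pod list above
	allocatable := resource.MustParse("2") // node allocatable CPU
	pct := requests.MilliValue() * 100 / allocatable.MilliValue()
	fmt.Printf("cpu: %s (%d%%)\n", requests.String(), pct) // prints: cpu: 950m (47%)
}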
	
	
	==> dmesg <==
	[Apr 7 12:09] overlayfs: '/var/lib/containers/storage/overlay/l/Q2QJNMTVZL6GMULS36RA5ZJGSA' not a directory
	
	
	==> etcd [5864d99cdd47d055aee6b1526bb1c6a1ee5862850613e2d63e0d93e141c7f4d5] <==
	2025-04-07 13:26:53.507201 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-07 13:27:03.507187 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-07 13:27:13.507293 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-07 13:27:23.507328 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-07 13:27:33.507332 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-07 13:27:43.507522 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-07 13:27:53.507143 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-07 13:28:03.507327 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-07 13:28:13.507145 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-07 13:28:23.507326 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-07 13:28:33.507385 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-07 13:28:43.507079 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-07 13:28:53.507145 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-07 13:29:03.507315 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-07 13:29:13.507097 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-07 13:29:23.507409 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-07 13:29:33.507200 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-07 13:29:43.507187 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-07 13:29:53.507302 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-07 13:30:03.507177 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-07 13:30:13.507347 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-07 13:30:23.507588 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-07 13:30:33.507113 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-07 13:30:43.507206 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-07 13:30:53.507207 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> etcd [ad3658b16b26420a5c8d95f062de8e38e1c9d8d5235a77bf2a50e7228aabb735] <==
	raft2025/04/07 13:22:04 INFO: ea7e25599daad906 became candidate at term 2
	raft2025/04/07 13:22:04 INFO: ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2
	raft2025/04/07 13:22:04 INFO: ea7e25599daad906 became leader at term 2
	raft2025/04/07 13:22:04 INFO: raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2
	2025-04-07 13:22:04.648270 I | etcdserver: published {Name:old-k8s-version-856421 ClientURLs:[https://192.168.76.2:2379]} to cluster 6f20f2c4b2fb5f8a
	2025-04-07 13:22:04.648524 I | etcdserver: setting up the initial cluster version to 3.4
	2025-04-07 13:22:04.648597 I | embed: ready to serve client requests
	2025-04-07 13:22:04.649943 I | embed: serving client requests on 192.168.76.2:2379
	2025-04-07 13:22:04.650065 I | embed: ready to serve client requests
	2025-04-07 13:22:04.651141 I | embed: serving client requests on 127.0.0.1:2379
	2025-04-07 13:22:04.710980 N | etcdserver/membership: set the initial cluster version to 3.4
	2025-04-07 13:22:04.711115 I | etcdserver/api: enabled capabilities for version 3.4
	2025-04-07 13:22:32.926535 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-07 13:22:41.923526 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-07 13:22:51.923483 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-07 13:23:01.923676 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-07 13:23:11.923483 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-07 13:23:21.925490 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-07 13:23:31.923844 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-07 13:23:41.923649 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-07 13:23:51.925212 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-07 13:24:01.923755 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-07 13:24:11.923561 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-07 13:24:21.923516 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-07 13:24:31.923549 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> kernel <==
	 13:31:01 up  5:13,  0 users,  load average: 5.07, 2.85, 2.83
	Linux old-k8s-version-856421 5.15.0-1081-aws #88~20.04.1-Ubuntu SMP Fri Mar 28 14:48:25 UTC 2025 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [b2ada56c528ba0083dc91efb8ea35b5b75cc614fa96a6cd93819b45a72ee05fb] <==
	I0407 13:22:33.229878       1 controller.go:401] Syncing nftables rules
	I0407 13:22:43.052518       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0407 13:22:43.052597       1 main.go:301] handling current node
	I0407 13:22:53.043506       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0407 13:22:53.043732       1 main.go:301] handling current node
	I0407 13:23:03.052526       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0407 13:23:03.052648       1 main.go:301] handling current node
	I0407 13:23:13.051860       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0407 13:23:13.051893       1 main.go:301] handling current node
	I0407 13:23:23.044874       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0407 13:23:23.044909       1 main.go:301] handling current node
	I0407 13:23:33.044055       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0407 13:23:33.044100       1 main.go:301] handling current node
	I0407 13:23:43.049760       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0407 13:23:43.049804       1 main.go:301] handling current node
	I0407 13:23:53.045909       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0407 13:23:53.045945       1 main.go:301] handling current node
	I0407 13:24:03.047545       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0407 13:24:03.047581       1 main.go:301] handling current node
	I0407 13:24:13.051189       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0407 13:24:13.051224       1 main.go:301] handling current node
	I0407 13:24:23.043066       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0407 13:24:23.043101       1 main.go:301] handling current node
	I0407 13:24:33.043076       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0407 13:24:33.043114       1 main.go:301] handling current node
	
	
	==> kindnet [e4568189822d430c58ad9f0a391ab15cbb27395e1408248c2c03f19d5dc9150b] <==
	I0407 13:28:57.156599       1 main.go:301] handling current node
	I0407 13:29:07.154068       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0407 13:29:07.154104       1 main.go:301] handling current node
	I0407 13:29:17.147542       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0407 13:29:17.147610       1 main.go:301] handling current node
	I0407 13:29:27.153958       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0407 13:29:27.154067       1 main.go:301] handling current node
	I0407 13:29:37.153814       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0407 13:29:37.153854       1 main.go:301] handling current node
	I0407 13:29:47.147760       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0407 13:29:47.147797       1 main.go:301] handling current node
	I0407 13:29:57.154535       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0407 13:29:57.154575       1 main.go:301] handling current node
	I0407 13:30:07.152842       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0407 13:30:07.152880       1 main.go:301] handling current node
	I0407 13:30:17.147951       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0407 13:30:17.148061       1 main.go:301] handling current node
	I0407 13:30:27.153805       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0407 13:30:27.153843       1 main.go:301] handling current node
	I0407 13:30:37.153790       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0407 13:30:37.153827       1 main.go:301] handling current node
	I0407 13:30:47.147599       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0407 13:30:47.147636       1 main.go:301] handling current node
	I0407 13:30:57.154584       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0407 13:30:57.154621       1 main.go:301] handling current node
	
	
	==> kube-apiserver [a1ef4f8376e9fc950587e9d4daa95427d19c48e4606813b7ce87102e58fd46af] <==
	I0407 13:27:31.767346       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0407 13:27:31.767356       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0407 13:28:10.812872       1 client.go:360] parsed scheme: "passthrough"
	I0407 13:28:10.813105       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0407 13:28:10.813193       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0407 13:28:17.207074       1 handler_proxy.go:102] no RequestInfo found in the context
	E0407 13:28:17.207329       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0407 13:28:17.207347       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0407 13:28:42.326737       1 client.go:360] parsed scheme: "passthrough"
	I0407 13:28:42.326790       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0407 13:28:42.326799       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0407 13:29:12.498697       1 client.go:360] parsed scheme: "passthrough"
	I0407 13:29:12.498741       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0407 13:29:12.498752       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0407 13:29:49.514099       1 client.go:360] parsed scheme: "passthrough"
	I0407 13:29:49.514153       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0407 13:29:49.514162       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0407 13:30:15.312399       1 handler_proxy.go:102] no RequestInfo found in the context
	E0407 13:30:15.312609       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0407 13:30:15.312625       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0407 13:30:27.022798       1 client.go:360] parsed scheme: "passthrough"
	I0407 13:30:27.022845       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0407 13:30:27.022853       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	
	==> kube-apiserver [d89f223f22b8606c12ec0e778fa21d2abe90349e214332d4a1fa32b45f514d9b] <==
	I0407 13:22:11.828581       1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
	I0407 13:22:11.828605       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0407 13:22:12.381249       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0407 13:22:12.433164       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0407 13:22:12.506657       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I0407 13:22:12.507971       1 controller.go:606] quota admission added evaluator for: endpoints
	I0407 13:22:12.515021       1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0407 13:22:12.833391       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
	I0407 13:22:13.440319       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	I0407 13:22:14.271049       1 controller.go:606] quota admission added evaluator for: deployments.apps
	I0407 13:22:14.322927       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	I0407 13:22:29.363288       1 controller.go:606] quota admission added evaluator for: replicasets.apps
	I0407 13:22:29.451983       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
	I0407 13:22:42.549185       1 client.go:360] parsed scheme: "passthrough"
	I0407 13:22:42.549235       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0407 13:22:42.549244       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0407 13:23:13.659861       1 client.go:360] parsed scheme: "passthrough"
	I0407 13:23:13.660064       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0407 13:23:13.660158       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0407 13:23:50.556353       1 client.go:360] parsed scheme: "passthrough"
	I0407 13:23:50.556401       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0407 13:23:50.556409       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0407 13:24:30.627016       1 client.go:360] parsed scheme: "passthrough"
	I0407 13:24:30.627061       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0407 13:24:30.627070       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	
	==> kube-controller-manager [04c3bb1dfe7c8d458ce5ec56757e932f8a599e0d467221dd4b85ef6a3fefa6df] <==
	W0407 13:26:37.427371       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0407 13:27:03.457165       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0407 13:27:09.078011       1 request.go:655] Throttling request took 1.0483435s, request: GET:https://192.168.76.2:8443/apis/storage.k8s.io/v1?timeout=32s
	W0407 13:27:09.929506       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0407 13:27:33.959144       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0407 13:27:41.580120       1 request.go:655] Throttling request took 1.047299102s, request: GET:https://192.168.76.2:8443/apis/networking.k8s.io/v1beta1?timeout=32s
	W0407 13:27:42.431730       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0407 13:28:04.460878       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0407 13:28:14.082286       1 request.go:655] Throttling request took 1.048367247s, request: GET:https://192.168.76.2:8443/apis/batch/v1beta1?timeout=32s
	W0407 13:28:14.933999       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0407 13:28:34.962737       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0407 13:28:46.584632       1 request.go:655] Throttling request took 1.048371477s, request: GET:https://192.168.76.2:8443/apis/events.k8s.io/v1beta1?timeout=32s
	W0407 13:28:47.436161       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0407 13:29:05.464598       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0407 13:29:19.086503       1 request.go:655] Throttling request took 1.048136871s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W0407 13:29:19.938099       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0407 13:29:35.968010       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0407 13:29:51.588654       1 request.go:655] Throttling request took 1.048422407s, request: GET:https://192.168.76.2:8443/apis/admissionregistration.k8s.io/v1beta1?timeout=32s
	W0407 13:29:52.440210       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0407 13:30:06.470097       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0407 13:30:24.090645       1 request.go:655] Throttling request took 1.048351001s, request: GET:https://192.168.76.2:8443/apis/batch/v1beta1?timeout=32s
	W0407 13:30:24.942143       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0407 13:30:36.972212       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0407 13:30:56.592508       1 request.go:655] Throttling request took 1.04815208s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W0407 13:30:57.444165       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	
	==> kube-controller-manager [2b349ae2ec417a9d000e61e8c66afdcccb2d202d992eec33826df20e410f87a2] <==
	I0407 13:22:29.417785       1 shared_informer.go:247] Caches are synced for service account 
	I0407 13:22:29.421628       1 shared_informer.go:247] Caches are synced for disruption 
	I0407 13:22:29.421651       1 disruption.go:339] Sending events to api server.
	I0407 13:22:29.428330       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-kfvbc"
	I0407 13:22:29.438507       1 shared_informer.go:247] Caches are synced for daemon sets 
	I0407 13:22:29.448040       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-gtrrb"
	I0407 13:22:29.486374       1 shared_informer.go:247] Caches are synced for ReplicationController 
	I0407 13:22:29.492899       1 shared_informer.go:247] Caches are synced for resource quota 
	I0407 13:22:29.496575       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-j5fsn"
	I0407 13:22:29.512522       1 shared_informer.go:247] Caches are synced for resource quota 
	I0407 13:22:29.529201       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-8q8nx"
	E0407 13:22:29.600347       1 daemon_controller.go:320] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"300f6abd-8358-438e-ac2d-b30583f29332", ResourceVersion:"281", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63879628934, loc:(*time.Location)(0x632eb80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x40012b09e0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x40012b0a00)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x40012b0a20), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0x40014b8a40), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x40012b0a40), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x40012b0a60), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.20.0", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0x40012b0aa0)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x400135f920), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x4000f09998), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x400017c930), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x400000ef08)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x4000f099e8)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
	I0407 13:22:29.603156       1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator 
	I0407 13:22:29.609186       1 shared_informer.go:247] Caches are synced for attach detach 
	E0407 13:22:29.632318       1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
	E0407 13:22:29.648318       1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
	I0407 13:22:29.759951       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I0407 13:22:30.061251       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0407 13:22:30.104396       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0407 13:22:30.104423       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0407 13:22:30.977654       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
	I0407 13:22:31.079373       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-kfvbc"
	I0407 13:22:34.339759       1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I0407 13:24:33.248179       1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-9975d5f86 to 1"
	E0407 13:24:33.318012       1 clusterroleaggregation_controller.go:181] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
	
	
	==> kube-proxy [74af9023b4fde2bd349b21c976719ba85dda1ea03171badab844478b8cbf2088] <==
	I0407 13:25:16.994516       1 node.go:172] Successfully retrieved node IP: 192.168.76.2
	I0407 13:25:16.994916       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
	W0407 13:25:17.025961       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0407 13:25:17.026513       1 server_others.go:185] Using iptables Proxier.
	I0407 13:25:17.026906       1 server.go:650] Version: v1.20.0
	I0407 13:25:17.029508       1 config.go:315] Starting service config controller
	I0407 13:25:17.029630       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0407 13:25:17.033242       1 config.go:224] Starting endpoint slice config controller
	I0407 13:25:17.034908       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0407 13:25:17.134535       1 shared_informer.go:247] Caches are synced for service config 
	I0407 13:25:17.135192       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	
	==> kube-proxy [77f2619b2d1aad99f71633fed74393b3b19f570432847d24bea319ce3f3cc54b] <==
	I0407 13:22:31.411656       1 node.go:172] Successfully retrieved node IP: 192.168.76.2
	I0407 13:22:31.411763       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
	W0407 13:22:31.445345       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0407 13:22:31.445438       1 server_others.go:185] Using iptables Proxier.
	I0407 13:22:31.445658       1 server.go:650] Version: v1.20.0
	I0407 13:22:31.447571       1 config.go:315] Starting service config controller
	I0407 13:22:31.447586       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0407 13:22:31.448041       1 config.go:224] Starting endpoint slice config controller
	I0407 13:22:31.448047       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0407 13:22:31.548108       1 shared_informer.go:247] Caches are synced for service config 
	I0407 13:22:31.548453       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	
	==> kube-scheduler [c6f6d481b0f4c36b1a380741fe1a1dc69435fb844001d3a69feaa1fddcdd4030] <==
	W0407 13:22:11.058570       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0407 13:22:11.058598       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0407 13:22:11.059053       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0407 13:22:11.126593       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0407 13:22:11.126806       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0407 13:22:11.126853       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0407 13:22:11.126888       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0407 13:22:11.136255       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0407 13:22:11.136582       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0407 13:22:11.144442       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0407 13:22:11.144707       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0407 13:22:11.145976       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0407 13:22:11.146240       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0407 13:22:11.147946       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0407 13:22:11.148268       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0407 13:22:11.155354       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0407 13:22:11.155675       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0407 13:22:11.155912       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0407 13:22:11.156240       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0407 13:22:12.068209       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0407 13:22:12.078072       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0407 13:22:12.102133       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0407 13:22:12.131491       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0407 13:22:12.208064       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0407 13:22:12.526927       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kube-scheduler [d6760ad08162f8ec381c7feef6ca753faf89ef2bb6b6aaf4ad3420559e5f73e7] <==
	I0407 13:25:08.117773       1 serving.go:331] Generated self-signed cert in-memory
	W0407 13:25:14.181318       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0407 13:25:14.182983       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0407 13:25:14.183067       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0407 13:25:14.183145       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0407 13:25:14.470287       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0407 13:25:14.472013       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0407 13:25:14.472033       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0407 13:25:14.472048       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I0407 13:25:14.573899       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kubelet <==
	Apr 07 13:29:15 old-k8s-version-856421 kubelet[667]: E0407 13:29:15.882105     667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Apr 07 13:29:19 old-k8s-version-856421 kubelet[667]: I0407 13:29:19.881302     667 scope.go:95] [topologymanager] RemoveContainer - Container ID: 1df7625042191f4e3be7c9854e1ca36378248104241854b01edb494d6c420c34
	Apr 07 13:29:19 old-k8s-version-856421 kubelet[667]: E0407 13:29:19.881643     667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
	Apr 07 13:29:28 old-k8s-version-856421 kubelet[667]: E0407 13:29:28.884253     667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Apr 07 13:29:32 old-k8s-version-856421 kubelet[667]: I0407 13:29:32.881668     667 scope.go:95] [topologymanager] RemoveContainer - Container ID: 1df7625042191f4e3be7c9854e1ca36378248104241854b01edb494d6c420c34
	Apr 07 13:29:32 old-k8s-version-856421 kubelet[667]: E0407 13:29:32.882068     667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
	Apr 07 13:29:39 old-k8s-version-856421 kubelet[667]: E0407 13:29:39.883031     667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Apr 07 13:29:47 old-k8s-version-856421 kubelet[667]: I0407 13:29:47.881357     667 scope.go:95] [topologymanager] RemoveContainer - Container ID: 1df7625042191f4e3be7c9854e1ca36378248104241854b01edb494d6c420c34
	Apr 07 13:29:47 old-k8s-version-856421 kubelet[667]: E0407 13:29:47.882527     667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
	Apr 07 13:29:50 old-k8s-version-856421 kubelet[667]: E0407 13:29:50.882583     667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Apr 07 13:29:59 old-k8s-version-856421 kubelet[667]: I0407 13:29:59.881312     667 scope.go:95] [topologymanager] RemoveContainer - Container ID: 1df7625042191f4e3be7c9854e1ca36378248104241854b01edb494d6c420c34
	Apr 07 13:29:59 old-k8s-version-856421 kubelet[667]: E0407 13:29:59.882436     667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
	Apr 07 13:30:01 old-k8s-version-856421 kubelet[667]: E0407 13:30:01.885267     667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Apr 07 13:30:10 old-k8s-version-856421 kubelet[667]: I0407 13:30:10.881624     667 scope.go:95] [topologymanager] RemoveContainer - Container ID: 1df7625042191f4e3be7c9854e1ca36378248104241854b01edb494d6c420c34
	Apr 07 13:30:10 old-k8s-version-856421 kubelet[667]: E0407 13:30:10.882871     667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
	Apr 07 13:30:15 old-k8s-version-856421 kubelet[667]: E0407 13:30:15.882153     667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Apr 07 13:30:22 old-k8s-version-856421 kubelet[667]: I0407 13:30:22.881884     667 scope.go:95] [topologymanager] RemoveContainer - Container ID: 1df7625042191f4e3be7c9854e1ca36378248104241854b01edb494d6c420c34
	Apr 07 13:30:22 old-k8s-version-856421 kubelet[667]: E0407 13:30:22.882311     667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
	Apr 07 13:30:28 old-k8s-version-856421 kubelet[667]: E0407 13:30:28.882142     667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Apr 07 13:30:35 old-k8s-version-856421 kubelet[667]: I0407 13:30:35.885009     667 scope.go:95] [topologymanager] RemoveContainer - Container ID: 1df7625042191f4e3be7c9854e1ca36378248104241854b01edb494d6c420c34
	Apr 07 13:30:35 old-k8s-version-856421 kubelet[667]: E0407 13:30:35.885355     667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
	Apr 07 13:30:41 old-k8s-version-856421 kubelet[667]: E0407 13:30:41.882269     667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Apr 07 13:30:49 old-k8s-version-856421 kubelet[667]: I0407 13:30:49.881277     667 scope.go:95] [topologymanager] RemoveContainer - Container ID: 1df7625042191f4e3be7c9854e1ca36378248104241854b01edb494d6c420c34
	Apr 07 13:30:49 old-k8s-version-856421 kubelet[667]: E0407 13:30:49.882356     667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
	Apr 07 13:30:53 old-k8s-version-856421 kubelet[667]: E0407 13:30:53.882047     667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	
	
	==> kubernetes-dashboard [3e668dbcb4dd3fef6e58de6d11f5522dbfad76947bfe7bab5fda3d0228b2f625] <==
	2025/04/07 13:25:36 Starting overwatch
	2025/04/07 13:25:36 Using namespace: kubernetes-dashboard
	2025/04/07 13:25:36 Using in-cluster config to connect to apiserver
	2025/04/07 13:25:36 Using secret token for csrf signing
	2025/04/07 13:25:36 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/04/07 13:25:36 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/04/07 13:25:36 Successful initial request to the apiserver, version: v1.20.0
	2025/04/07 13:25:36 Generating JWE encryption key
	2025/04/07 13:25:36 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/04/07 13:25:36 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/04/07 13:25:37 Initializing JWE encryption key from synchronized object
	2025/04/07 13:25:37 Creating in-cluster Sidecar client
	2025/04/07 13:25:37 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/04/07 13:25:37 Serving insecurely on HTTP port: 9090
	2025/04/07 13:26:07 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/04/07 13:26:37 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/04/07 13:27:07 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/04/07 13:27:37 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/04/07 13:28:07 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/04/07 13:28:37 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/04/07 13:29:07 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/04/07 13:29:37 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/04/07 13:30:07 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/04/07 13:30:37 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [2c994f7c46244b75a5d9fce69b3c3f15c9a2ce0a534a675952a6506d4c18ba61] <==
	I0407 13:26:02.106687       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0407 13:26:02.192476       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0407 13:26:02.193247       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0407 13:26:19.680789       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0407 13:26:19.680959       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-856421_f959ad1a-c52c-4bcb-af4f-c159208f638a!
	I0407 13:26:19.684640       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"be93a60a-1bee-444f-ada8-fa2850a45a39", APIVersion:"v1", ResourceVersion:"862", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-856421_f959ad1a-c52c-4bcb-af4f-c159208f638a became leader
	I0407 13:26:19.784206       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-856421_f959ad1a-c52c-4bcb-af4f-c159208f638a!
	
	
	==> storage-provisioner [d05c978cfa5a3d8c7465b05f0087ddff9b59b6291d3e17156779dbb770748849] <==
	I0407 13:25:16.629182       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0407 13:25:46.637924       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-856421 -n old-k8s-version-856421
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-856421 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-9975d5f86-tkvrz
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-856421 describe pod metrics-server-9975d5f86-tkvrz
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-856421 describe pod metrics-server-9975d5f86-tkvrz: exit status 1 (122.11588ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-9975d5f86-tkvrz" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-856421 describe pod metrics-server-9975d5f86-tkvrz: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (377.66s)
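
The kubelet tail above repeats two signatures for the whole capture window: metrics-server stuck in ImagePullBackOff on fake.domain/registry.k8s.io/echoserver:1.4 (the test deliberately points the metrics-server addon at an unreachable registry, so the pull is expected to back off), and dashboard-metrics-scraper in CrashLoopBackOff. A minimal manual triage sketch, assuming the profile's context is still live and that the pods carry the upstream k8s-app labels; both are assumptions, and the NotFound from the describe step above shows how quickly the pod names churn:

	# Resolve namespace+name pairs first to avoid the name-churn race the
	# describe step above ran into, then describe each non-running pod.
	kubectl --context old-k8s-version-856421 get po -A --field-selector=status.phase!=Running \
	  -o jsonpath='{range .items[*]}{.metadata.namespace}{" "}{.metadata.name}{"\n"}{end}' \
	  | while read -r ns name; do
	      kubectl --context old-k8s-version-856421 -n "$ns" describe pod "$name"
	    done
	# Pull failures surface as Failed events on the pod:
	kubectl --context old-k8s-version-856421 -n kube-system get events \
	  --field-selector involvedObject.kind=Pod,reason=Failed
	# Last crash output from the scraper (label assumed from the upstream manifest):
	kubectl --context old-k8s-version-856421 -n kubernetes-dashboard \
	  logs -l k8s-app=dashboard-metrics-scraper --previous --tail=50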

                                                
                                    

Test pass (300/331)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 5.78
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.4
9 TestDownloadOnly/v1.20.0/DeleteAll 0.28
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.15
12 TestDownloadOnly/v1.32.2/json-events 4.84
13 TestDownloadOnly/v1.32.2/preload-exists 0
17 TestDownloadOnly/v1.32.2/LogsDuration 0.09
18 TestDownloadOnly/v1.32.2/DeleteAll 0.22
19 TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds 0.15
21 TestBinaryMirror 0.67
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
27 TestAddons/Setup 217.27
29 TestAddons/serial/Volcano 40.37
31 TestAddons/serial/GCPAuth/Namespaces 0.21
32 TestAddons/serial/GCPAuth/FakeCredentials 9.92
35 TestAddons/parallel/Registry 14.99
36 TestAddons/parallel/Ingress 19.88
37 TestAddons/parallel/InspektorGadget 12
38 TestAddons/parallel/MetricsServer 6.82
40 TestAddons/parallel/CSI 33.31
41 TestAddons/parallel/Headlamp 17.28
42 TestAddons/parallel/CloudSpanner 5.91
43 TestAddons/parallel/LocalPath 52.44
44 TestAddons/parallel/NvidiaDevicePlugin 5.58
45 TestAddons/parallel/Yakd 11.84
47 TestAddons/StoppedEnableDisable 12.28
48 TestCertOptions 35.36
49 TestCertExpiration 226.01
51 TestForceSystemdFlag 34.86
52 TestForceSystemdEnv 45.69
53 TestDockerEnvContainerd 45.97
58 TestErrorSpam/setup 33.39
59 TestErrorSpam/start 0.78
60 TestErrorSpam/status 1.12
61 TestErrorSpam/pause 1.91
62 TestErrorSpam/unpause 1.87
63 TestErrorSpam/stop 1.49
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 62.15
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 6.2
70 TestFunctional/serial/KubeContext 0.08
71 TestFunctional/serial/KubectlGetPods 0.1
74 TestFunctional/serial/CacheCmd/cache/add_remote 4.25
75 TestFunctional/serial/CacheCmd/cache/add_local 1.32
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.06
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
79 TestFunctional/serial/CacheCmd/cache/cache_reload 2.06
80 TestFunctional/serial/CacheCmd/cache/delete 0.12
81 TestFunctional/serial/MinikubeKubectlCmd 0.15
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
83 TestFunctional/serial/ExtraConfig 44.73
84 TestFunctional/serial/ComponentHealth 0.11
85 TestFunctional/serial/LogsCmd 1.79
86 TestFunctional/serial/LogsFileCmd 1.8
87 TestFunctional/serial/InvalidService 4.27
89 TestFunctional/parallel/ConfigCmd 0.46
90 TestFunctional/parallel/DashboardCmd 14.02
91 TestFunctional/parallel/DryRun 0.47
92 TestFunctional/parallel/InternationalLanguage 0.27
93 TestFunctional/parallel/StatusCmd 1.32
97 TestFunctional/parallel/ServiceCmdConnect 9.73
98 TestFunctional/parallel/AddonsCmd 0.24
99 TestFunctional/parallel/PersistentVolumeClaim 27.24
101 TestFunctional/parallel/SSHCmd 0.79
102 TestFunctional/parallel/CpCmd 2.49
104 TestFunctional/parallel/FileSync 0.32
105 TestFunctional/parallel/CertSync 2.18
109 TestFunctional/parallel/NodeLabels 0.12
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.54
113 TestFunctional/parallel/License 0.26
115 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.75
116 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
118 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.47
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.11
120 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
124 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.1
125 TestFunctional/parallel/ServiceCmd/DeployApp 8.24
126 TestFunctional/parallel/ProfileCmd/profile_not_create 0.48
127 TestFunctional/parallel/ServiceCmd/List 0.62
128 TestFunctional/parallel/ProfileCmd/profile_list 0.54
129 TestFunctional/parallel/ServiceCmd/JSONOutput 0.64
130 TestFunctional/parallel/ProfileCmd/profile_json_output 0.58
131 TestFunctional/parallel/ServiceCmd/HTTPS 0.59
132 TestFunctional/parallel/MountCmd/any-port 8.54
133 TestFunctional/parallel/ServiceCmd/Format 0.42
134 TestFunctional/parallel/ServiceCmd/URL 0.47
135 TestFunctional/parallel/MountCmd/specific-port 2.42
136 TestFunctional/parallel/MountCmd/VerifyCleanup 2.4
137 TestFunctional/parallel/Version/short 0.08
138 TestFunctional/parallel/Version/components 1.41
139 TestFunctional/parallel/ImageCommands/ImageListShort 0.3
140 TestFunctional/parallel/ImageCommands/ImageListTable 0.28
141 TestFunctional/parallel/ImageCommands/ImageListJson 0.31
142 TestFunctional/parallel/ImageCommands/ImageListYaml 0.28
143 TestFunctional/parallel/ImageCommands/ImageBuild 5.39
144 TestFunctional/parallel/ImageCommands/Setup 0.86
145 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.21
146 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.19
147 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.67
148 TestFunctional/parallel/UpdateContextCmd/no_changes 0.22
149 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.21
150 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.23
151 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.47
152 TestFunctional/parallel/ImageCommands/ImageRemove 0.61
153 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.78
154 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.46
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
162 TestMultiControlPlane/serial/StartCluster 122.04
163 TestMultiControlPlane/serial/DeployApp 33.88
164 TestMultiControlPlane/serial/PingHostFromPods 1.69
165 TestMultiControlPlane/serial/AddWorkerNode 21.05
166 TestMultiControlPlane/serial/NodeLabels 0.12
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.12
168 TestMultiControlPlane/serial/CopyFile 19.94
169 TestMultiControlPlane/serial/StopSecondaryNode 12.9
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.79
171 TestMultiControlPlane/serial/RestartSecondaryNode 18.48
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.02
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 135.95
174 TestMultiControlPlane/serial/DeleteSecondaryNode 10.76
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.75
176 TestMultiControlPlane/serial/StopCluster 36.11
177 TestMultiControlPlane/serial/RestartCluster 63.37
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.78
179 TestMultiControlPlane/serial/AddSecondaryNode 46.4
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.04
184 TestJSONOutput/start/Command 49.86
185 TestJSONOutput/start/Audit 0
187 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
188 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
190 TestJSONOutput/pause/Command 0.78
191 TestJSONOutput/pause/Audit 0
193 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
196 TestJSONOutput/unpause/Command 0.72
197 TestJSONOutput/unpause/Audit 0
199 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
202 TestJSONOutput/stop/Command 5.84
203 TestJSONOutput/stop/Audit 0
205 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
206 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
207 TestErrorJSONOutput 0.26
209 TestKicCustomNetwork/create_custom_network 38.29
210 TestKicCustomNetwork/use_default_bridge_network 34.35
211 TestKicExistingNetwork 35.9
212 TestKicCustomSubnet 36.94
213 TestKicStaticIP 35.01
214 TestMainNoArgs 0.06
215 TestMinikubeProfile 67.66
218 TestMountStart/serial/StartWithMountFirst 6.12
219 TestMountStart/serial/VerifyMountFirst 0.28
220 TestMountStart/serial/StartWithMountSecond 6.66
221 TestMountStart/serial/VerifyMountSecond 0.26
222 TestMountStart/serial/DeleteFirst 1.64
223 TestMountStart/serial/VerifyMountPostDelete 0.26
224 TestMountStart/serial/Stop 1.23
225 TestMountStart/serial/RestartStopped 7.68
226 TestMountStart/serial/VerifyMountPostStop 0.27
229 TestMultiNode/serial/FreshStart2Nodes 65.77
230 TestMultiNode/serial/DeployApp2Nodes 20.84
231 TestMultiNode/serial/PingHostFrom2Pods 1.1
232 TestMultiNode/serial/AddNode 17.25
233 TestMultiNode/serial/MultiNodeLabels 0.11
234 TestMultiNode/serial/ProfileList 0.99
235 TestMultiNode/serial/CopyFile 10.24
236 TestMultiNode/serial/StopNode 2.23
237 TestMultiNode/serial/StartAfterStop 9.56
238 TestMultiNode/serial/RestartKeepsNodes 82.34
239 TestMultiNode/serial/DeleteNode 5.37
240 TestMultiNode/serial/StopMultiNode 24.08
241 TestMultiNode/serial/RestartMultiNode 46.93
242 TestMultiNode/serial/ValidateNameConflict 35
247 TestPreload 120.89
249 TestScheduledStopUnix 106.44
252 TestInsufficientStorage 11.52
253 TestRunningBinaryUpgrade 85.96
255 TestKubernetesUpgrade 352.93
256 TestMissingContainerUpgrade 201.17
258 TestPause/serial/Start 67.49
259 TestPause/serial/SecondStartNoReconfiguration 7.02
260 TestPause/serial/Pause 1.06
261 TestPause/serial/VerifyStatus 0.46
262 TestPause/serial/Unpause 0.89
263 TestPause/serial/PauseAgain 1.19
264 TestPause/serial/DeletePaused 3.2
265 TestPause/serial/VerifyDeletedResources 6.49
266 TestStoppedBinaryUpgrade/Setup 0.73
267 TestStoppedBinaryUpgrade/Upgrade 107.87
268 TestStoppedBinaryUpgrade/MinikubeLogs 1.39
277 TestNoKubernetes/serial/StartNoK8sWithVersion 0.13
278 TestNoKubernetes/serial/StartWithK8s 42.89
286 TestNetworkPlugins/group/false 4.36
290 TestNoKubernetes/serial/StartWithStopK8s 18.82
291 TestNoKubernetes/serial/Start 5.74
292 TestNoKubernetes/serial/VerifyK8sNotRunning 0.27
293 TestNoKubernetes/serial/ProfileList 1.07
294 TestNoKubernetes/serial/Stop 1.23
295 TestNoKubernetes/serial/StartNoArgs 7.04
296 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.28
298 TestStartStop/group/old-k8s-version/serial/FirstStart 173.88
300 TestStartStop/group/no-preload/serial/FirstStart 73.7
301 TestStartStop/group/no-preload/serial/DeployApp 9.52
302 TestStartStop/group/old-k8s-version/serial/DeployApp 8.59
303 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.21
304 TestStartStop/group/no-preload/serial/Stop 12.09
305 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.04
306 TestStartStop/group/old-k8s-version/serial/Stop 12.27
307 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.19
308 TestStartStop/group/no-preload/serial/SecondStart 268.31
309 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.26
311 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
312 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.11
313 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.26
314 TestStartStop/group/no-preload/serial/Pause 3.22
316 TestStartStop/group/embed-certs/serial/FirstStart 50.77
317 TestStartStop/group/embed-certs/serial/DeployApp 9.36
318 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.17
319 TestStartStop/group/embed-certs/serial/Stop 12.1
320 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
321 TestStartStop/group/embed-certs/serial/SecondStart 274.8
322 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
323 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.13
324 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.28
325 TestStartStop/group/old-k8s-version/serial/Pause 3.12
327 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 64.42
328 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.35
329 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.12
330 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.15
331 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.2
332 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 279.99
333 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
334 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.1
335 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.26
336 TestStartStop/group/embed-certs/serial/Pause 3.21
338 TestStartStop/group/newest-cni/serial/FirstStart 37.93
339 TestStartStop/group/newest-cni/serial/DeployApp 0
340 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.33
341 TestStartStop/group/newest-cni/serial/Stop 1.26
342 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.21
343 TestStartStop/group/newest-cni/serial/SecondStart 15.7
344 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
345 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
346 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.27
347 TestStartStop/group/newest-cni/serial/Pause 3.1
348 TestNetworkPlugins/group/auto/Start 66.75
349 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
350 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.1
351 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.3
352 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.64
353 TestNetworkPlugins/group/auto/KubeletFlags 0.39
354 TestNetworkPlugins/group/auto/NetCatPod 11.35
355 TestNetworkPlugins/group/kindnet/Start 67.62
356 TestNetworkPlugins/group/auto/DNS 0.48
357 TestNetworkPlugins/group/auto/Localhost 0.39
358 TestNetworkPlugins/group/auto/HairPin 0.21
359 TestNetworkPlugins/group/calico/Start 64.33
360 TestNetworkPlugins/group/kindnet/ControllerPod 6
361 TestNetworkPlugins/group/kindnet/KubeletFlags 0.38
362 TestNetworkPlugins/group/kindnet/NetCatPod 11.34
363 TestNetworkPlugins/group/kindnet/DNS 0.32
364 TestNetworkPlugins/group/kindnet/Localhost 0.25
365 TestNetworkPlugins/group/kindnet/HairPin 0.24
366 TestNetworkPlugins/group/calico/ControllerPod 6.01
367 TestNetworkPlugins/group/calico/KubeletFlags 0.42
368 TestNetworkPlugins/group/calico/NetCatPod 10.41
369 TestNetworkPlugins/group/custom-flannel/Start 57.89
370 TestNetworkPlugins/group/calico/DNS 0.22
371 TestNetworkPlugins/group/calico/Localhost 0.2
372 TestNetworkPlugins/group/calico/HairPin 0.21
373 TestNetworkPlugins/group/enable-default-cni/Start 52.38
374 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.4
375 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.43
376 TestNetworkPlugins/group/custom-flannel/DNS 0.26
377 TestNetworkPlugins/group/custom-flannel/Localhost 0.21
378 TestNetworkPlugins/group/custom-flannel/HairPin 0.17
379 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.44
380 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.4
381 TestNetworkPlugins/group/flannel/Start 55.06
382 TestNetworkPlugins/group/enable-default-cni/DNS 0.36
383 TestNetworkPlugins/group/enable-default-cni/Localhost 0.3
384 TestNetworkPlugins/group/enable-default-cni/HairPin 0.21
385 TestNetworkPlugins/group/bridge/Start 79.81
386 TestNetworkPlugins/group/flannel/ControllerPod 6
387 TestNetworkPlugins/group/flannel/KubeletFlags 0.4
388 TestNetworkPlugins/group/flannel/NetCatPod 10.38
389 TestNetworkPlugins/group/flannel/DNS 0.18
390 TestNetworkPlugins/group/flannel/Localhost 0.15
391 TestNetworkPlugins/group/flannel/HairPin 0.16
392 TestNetworkPlugins/group/bridge/KubeletFlags 0.31
393 TestNetworkPlugins/group/bridge/NetCatPod 10.28
394 TestNetworkPlugins/group/bridge/DNS 0.18
395 TestNetworkPlugins/group/bridge/Localhost 0.14
396 TestNetworkPlugins/group/bridge/HairPin 0.15
TestDownloadOnly/v1.20.0/json-events (5.78s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-006565 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-006565 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (5.776758991s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (5.78s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0407 12:36:22.821139  878594 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
I0407 12:36:22.821220  878594 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20602-873072/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)
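
For context, the assertion behind this test is file presence: the preload tarball cached by the earlier download-only run must already be on disk, so no network traffic is needed. A hand-run equivalent of the check (a sketch; the real logic lives in preload.go), using the exact path reported above:

	# Exit status 0 means the v1.20.0 containerd preload is cached locally.
	test -f /home/jenkins/minikube-integration/20602-873072/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 \
	  && echo "preload exists"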

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.4s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-006565
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-006565: exit status 85 (396.087481ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-006565 | jenkins | v1.35.0 | 07 Apr 25 12:36 UTC |          |
	|         | -p download-only-006565        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/07 12:36:17
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0407 12:36:17.088251  878600 out.go:345] Setting OutFile to fd 1 ...
	I0407 12:36:17.088370  878600 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 12:36:17.088381  878600 out.go:358] Setting ErrFile to fd 2...
	I0407 12:36:17.088386  878600 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 12:36:17.088743  878600 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20602-873072/.minikube/bin
	W0407 12:36:17.088904  878600 root.go:314] Error reading config file at /home/jenkins/minikube-integration/20602-873072/.minikube/config/config.json: open /home/jenkins/minikube-integration/20602-873072/.minikube/config/config.json: no such file or directory
	I0407 12:36:17.089338  878600 out.go:352] Setting JSON to true
	I0407 12:36:17.090405  878600 start.go:129] hostinfo: {"hostname":"ip-172-31-24-2","uptime":15521,"bootTime":1744013856,"procs":153,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1081-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I0407 12:36:17.090482  878600 start.go:139] virtualization:  
	I0407 12:36:17.094498  878600 out.go:97] [download-only-006565] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	W0407 12:36:17.094763  878600 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/20602-873072/.minikube/cache/preloaded-tarball: no such file or directory
	I0407 12:36:17.094823  878600 notify.go:220] Checking for updates...
	I0407 12:36:17.097598  878600 out.go:169] MINIKUBE_LOCATION=20602
	I0407 12:36:17.100465  878600 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0407 12:36:17.103371  878600 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20602-873072/kubeconfig
	I0407 12:36:17.106217  878600 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20602-873072/.minikube
	I0407 12:36:17.109058  878600 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0407 12:36:17.114713  878600 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0407 12:36:17.114990  878600 driver.go:394] Setting default libvirt URI to qemu:///system
	I0407 12:36:17.151017  878600 docker.go:123] docker version: linux-28.0.4:Docker Engine - Community
	I0407 12:36:17.151139  878600 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0407 12:36:17.208083  878600 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:true NGoroutines:53 SystemTime:2025-04-07 12:36:17.198274763 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1081-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0]] Warnings:<nil>}}
	I0407 12:36:17.208195  878600 docker.go:318] overlay module found
	I0407 12:36:17.211245  878600 out.go:97] Using the docker driver based on user configuration
	I0407 12:36:17.211292  878600 start.go:297] selected driver: docker
	I0407 12:36:17.211307  878600 start.go:901] validating driver "docker" against <nil>
	I0407 12:36:17.211438  878600 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0407 12:36:17.272844  878600 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:true NGoroutines:53 SystemTime:2025-04-07 12:36:17.263252749 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1081-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0]] Warnings:<nil>}}
	I0407 12:36:17.273013  878600 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0407 12:36:17.273311  878600 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0407 12:36:17.273476  878600 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0407 12:36:17.276687  878600 out.go:169] Using Docker driver with root privileges
	I0407 12:36:17.279567  878600 cni.go:84] Creating CNI manager for ""
	I0407 12:36:17.279641  878600 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0407 12:36:17.279654  878600 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0407 12:36:17.279741  878600 start.go:340] cluster config:
	{Name:download-only-006565 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-006565 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 12:36:17.282763  878600 out.go:97] Starting "download-only-006565" primary control-plane node in "download-only-006565" cluster
	I0407 12:36:17.282801  878600 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0407 12:36:17.285660  878600 out.go:97] Pulling base image v0.0.46-1743675393-20591 ...
	I0407 12:36:17.285725  878600 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0407 12:36:17.285826  878600 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 in local docker daemon
	I0407 12:36:17.301670  878600 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 to local cache
	I0407 12:36:17.301912  878600 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 in local cache directory
	I0407 12:36:17.302010  878600 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 to local cache
	I0407 12:36:17.343659  878600 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0407 12:36:17.343688  878600 cache.go:56] Caching tarball of preloaded images
	I0407 12:36:17.343856  878600 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0407 12:36:17.347185  878600 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0407 12:36:17.347224  878600 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I0407 12:36:17.430982  878600 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:7e3d48ccb9f143791669d02e14ce1643 -> /home/jenkins/minikube-integration/20602-873072/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0407 12:36:21.044731  878600 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I0407 12:36:21.044898  878600 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/20602-873072/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	
	
	* The control-plane node download-only-006565 host does not exist
	  To start a cluster, run: "minikube start -p download-only-006565"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.40s)
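
A minimal sketch (not part of the test run): the "Last Start" log above fetches the v1.20.0 preload tarball with an md5 digest embedded in the download URL, so the cached copy can be re-verified by hand, assuming md5sum is available on the host:

  md5sum /home/jenkins/minikube-integration/20602-873072/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
  # compare against the ?checksum=md5: parameter in the URL above: 7e3d48ccb9f143791669d02e14ce1643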

TestDownloadOnly/v1.20.0/DeleteAll (0.28s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.28s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.15s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-006565
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.15s)

TestDownloadOnly/v1.32.2/json-events (4.84s)

=== RUN   TestDownloadOnly/v1.32.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-200412 --force --alsologtostderr --kubernetes-version=v1.32.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-200412 --force --alsologtostderr --kubernetes-version=v1.32.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (4.84341425s)
--- PASS: TestDownloadOnly/v1.32.2/json-events (4.84s)

TestDownloadOnly/v1.32.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.32.2/preload-exists
I0407 12:36:28.493970  878594 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
I0407 12:36:28.494013  878594 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20602-873072/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.32.2/preload-exists (0.00s)

TestDownloadOnly/v1.32.2/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.32.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-200412
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-200412: exit status 85 (84.969038ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-006565 | jenkins | v1.35.0 | 07 Apr 25 12:36 UTC |                     |
	|         | -p download-only-006565        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.35.0 | 07 Apr 25 12:36 UTC | 07 Apr 25 12:36 UTC |
	| delete  | -p download-only-006565        | download-only-006565 | jenkins | v1.35.0 | 07 Apr 25 12:36 UTC | 07 Apr 25 12:36 UTC |
	| start   | -o=json --download-only        | download-only-200412 | jenkins | v1.35.0 | 07 Apr 25 12:36 UTC |                     |
	|         | -p download-only-200412        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/07 12:36:23
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0407 12:36:23.700563  878803 out.go:345] Setting OutFile to fd 1 ...
	I0407 12:36:23.700677  878803 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 12:36:23.700686  878803 out.go:358] Setting ErrFile to fd 2...
	I0407 12:36:23.700693  878803 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 12:36:23.700931  878803 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20602-873072/.minikube/bin
	I0407 12:36:23.701334  878803 out.go:352] Setting JSON to true
	I0407 12:36:23.702177  878803 start.go:129] hostinfo: {"hostname":"ip-172-31-24-2","uptime":15528,"bootTime":1744013856,"procs":151,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1081-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I0407 12:36:23.702249  878803 start.go:139] virtualization:  
	I0407 12:36:23.706126  878803 out.go:97] [download-only-200412] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	I0407 12:36:23.706455  878803 notify.go:220] Checking for updates...
	I0407 12:36:23.709818  878803 out.go:169] MINIKUBE_LOCATION=20602
	I0407 12:36:23.713166  878803 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0407 12:36:23.716545  878803 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20602-873072/kubeconfig
	I0407 12:36:23.720371  878803 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20602-873072/.minikube
	I0407 12:36:23.723612  878803 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0407 12:36:23.729919  878803 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0407 12:36:23.730179  878803 driver.go:394] Setting default libvirt URI to qemu:///system
	I0407 12:36:23.762187  878803 docker.go:123] docker version: linux-28.0.4:Docker Engine - Community
	I0407 12:36:23.762310  878803 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0407 12:36:23.818809  878803 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:true NGoroutines:46 SystemTime:2025-04-07 12:36:23.809628509 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1081-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0]] Warnings:<nil>}}
	I0407 12:36:23.818921  878803 docker.go:318] overlay module found
	I0407 12:36:23.822013  878803 out.go:97] Using the docker driver based on user configuration
	I0407 12:36:23.822062  878803 start.go:297] selected driver: docker
	I0407 12:36:23.822075  878803 start.go:901] validating driver "docker" against <nil>
	I0407 12:36:23.822185  878803 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0407 12:36:23.891360  878803 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:true NGoroutines:46 SystemTime:2025-04-07 12:36:23.882442799 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1081-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0]] Warnings:<nil>}}
	I0407 12:36:23.891553  878803 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0407 12:36:23.891838  878803 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0407 12:36:23.892005  878803 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0407 12:36:23.895152  878803 out.go:169] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-200412 host does not exist
	  To start a cluster, run: "minikube start -p download-only-200412"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.32.2/LogsDuration (0.09s)

TestDownloadOnly/v1.32.2/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.32.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.32.2/DeleteAll (0.22s)

TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds (0.15s)

=== RUN   TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-200412
--- PASS: TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds (0.15s)

TestBinaryMirror (0.67s)

=== RUN   TestBinaryMirror
I0407 12:36:29.846609  878594 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.2/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-234626 --alsologtostderr --binary-mirror http://127.0.0.1:44523 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-234626" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-234626
--- PASS: TestBinaryMirror (0.67s)
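
A sketch of the checksum=file: pattern logged by binary.go:74 above (hypothetical, outside the test harness), assuming curl and sha256sum on the host; the URLs are the ones printed in the log:

  curl -LO https://dl.k8s.io/release/v1.32.2/bin/linux/arm64/kubectl
  curl -LO https://dl.k8s.io/release/v1.32.2/bin/linux/arm64/kubectl.sha256
  # the .sha256 file holds only the digest, so build a "digest  filename" line for verification
  echo "$(cat kubectl.sha256)  kubectl" | sha256sum --check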

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-596243
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-596243: exit status 85 (63.816127ms)

-- stdout --
	* Profile "addons-596243" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-596243"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-596243
addons_test.go:950: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-596243: exit status 85 (80.136832ms)

-- stdout --
	* Profile "addons-596243" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-596243"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

TestAddons/Setup (217.27s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-arm64 start -p addons-596243 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-arm64 start -p addons-596243 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m37.268168592s)
--- PASS: TestAddons/Setup (217.27s)

TestAddons/serial/Volcano (40.37s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:823: volcano-controller stabilized in 66.824549ms
addons_test.go:815: volcano-admission stabilized in 67.191807ms
addons_test.go:807: volcano-scheduler stabilized in 68.44431ms
addons_test.go:829: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-75fdd99bcf-v7m6t" [0cdd50b5-1b3e-4234-b147-270fff527af3] Running
addons_test.go:829: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.005296603s
addons_test.go:833: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-75d8f6b5c-zrptv" [28e8b67c-d25f-47d6-ab78-51422a3303fd] Running
addons_test.go:833: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.004444993s
addons_test.go:837: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-86bdc5c9c-vbwx8" [d2df2b40-ba5f-49cc-9b30-3a463e26ee66] Running
addons_test.go:837: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003859958s
addons_test.go:842: (dbg) Run:  kubectl --context addons-596243 delete -n volcano-system job volcano-admission-init
addons_test.go:848: (dbg) Run:  kubectl --context addons-596243 create -f testdata/vcjob.yaml
addons_test.go:856: (dbg) Run:  kubectl --context addons-596243 get vcjob -n my-volcano
addons_test.go:874: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [4f1e7c62-9c7b-477a-b9a2-789554e70148] Pending
helpers_test.go:344: "test-job-nginx-0" [4f1e7c62-9c7b-477a-b9a2-789554e70148] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [4f1e7c62-9c7b-477a-b9a2-789554e70148] Running
addons_test.go:874: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 12.002935058s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-596243 addons disable volcano --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-596243 addons disable volcano --alsologtostderr -v=1: (11.605651116s)
--- PASS: TestAddons/serial/Volcano (40.37s)
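
An equivalent of the readiness polls above, sketched as plain kubectl against the same context (a sketch only; it assumes the volcano addon and the my-volcano namespace still exist, and the test disables volcano at the end):

  kubectl --context addons-596243 get pods -n volcano-system
  kubectl --context addons-596243 wait pod -n my-volcano -l volcano.sh/job-name=test-job --for=condition=Ready --timeout=180s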

TestAddons/serial/GCPAuth/Namespaces (0.21s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-596243 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-596243 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.21s)

TestAddons/serial/GCPAuth/FakeCredentials (9.92s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-596243 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-596243 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [c4efc736-2373-4e16-b37a-12be2e10e0b5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [c4efc736-2373-4e16-b37a-12be2e10e0b5] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.004394167s
addons_test.go:633: (dbg) Run:  kubectl --context addons-596243 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-596243 describe sa gcp-auth-test
addons_test.go:659: (dbg) Run:  kubectl --context addons-596243 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:683: (dbg) Run:  kubectl --context addons-596243 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.92s)

TestAddons/parallel/Registry (14.99s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 4.691201ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6c88467877-t49v7" [6a42cfe9-72f9-4f6a-a78f-d6231b2961bf] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003049439s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-tvt87" [24ae5e5d-37b9-40db-b98e-6d194204d3f1] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003700127s
addons_test.go:331: (dbg) Run:  kubectl --context addons-596243 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-596243 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-596243 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.971460076s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-arm64 -p addons-596243 ip
2025/04/07 12:41:21 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-596243 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (14.99s)
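
The DEBUG line above hits the registry straight from the host IP; a hedged sketch of the same check, assuming (not asserted by this log) that the addon speaks the standard Docker Registry v2 API on port 5000:

  curl -s http://$(out/minikube-linux-arm64 -p addons-596243 ip):5000/v2/_catalog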

TestAddons/parallel/Ingress (19.88s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-596243 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-596243 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-596243 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [7fdf9aa6-1c7b-4cb2-b40e-af439797b397] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [7fdf9aa6-1c7b-4cb2-b40e-af439797b397] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.01219411s
I0407 12:42:31.368439  878594 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-arm64 -p addons-596243 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context addons-596243 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-arm64 -p addons-596243 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-596243 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-596243 addons disable ingress-dns --alsologtostderr -v=1: (1.392639485s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-596243 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-596243 addons disable ingress --alsologtostderr -v=1: (7.766799647s)
--- PASS: TestAddons/parallel/Ingress (19.88s)

TestAddons/parallel/InspektorGadget (12s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-d2b8q" [03746faa-f95e-4e0d-b1f3-b775e9f2addb] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.00374349s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-596243 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-596243 addons disable inspektor-gadget --alsologtostderr -v=1: (5.998871833s)
--- PASS: TestAddons/parallel/InspektorGadget (12.00s)

TestAddons/parallel/MetricsServer (6.82s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 3.642638ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7fbb699795-p6q99" [cea9cdea-e211-49a4-974b-af032dabf00c] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.00386563s
addons_test.go:402: (dbg) Run:  kubectl --context addons-596243 top pods -n kube-system
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-596243 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.82s)

TestAddons/parallel/CSI (33.31s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I0407 12:41:48.148752  878594 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0407 12:41:48.152465  878594 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0407 12:41:48.152497  878594 kapi.go:107] duration metric: took 7.012033ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 7.022799ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-596243 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-596243 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-596243 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-596243 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-596243 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-596243 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [664d4181-3165-4ce4-8b47-ad61ad24372e] Pending
helpers_test.go:344: "task-pv-pod" [664d4181-3165-4ce4-8b47-ad61ad24372e] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [664d4181-3165-4ce4-8b47-ad61ad24372e] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.003920496s
addons_test.go:511: (dbg) Run:  kubectl --context addons-596243 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-596243 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-596243 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-596243 delete pod task-pv-pod
addons_test.go:527: (dbg) Run:  kubectl --context addons-596243 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-596243 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-596243 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-596243 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-596243 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-596243 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [3a90ae78-57ea-4b4f-aaeb-19ade2ca4228] Pending
helpers_test.go:344: "task-pv-pod-restore" [3a90ae78-57ea-4b4f-aaeb-19ade2ca4228] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [3a90ae78-57ea-4b4f-aaeb-19ade2ca4228] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003537863s
addons_test.go:553: (dbg) Run:  kubectl --context addons-596243 delete pod task-pv-pod-restore
addons_test.go:557: (dbg) Run:  kubectl --context addons-596243 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-596243 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-596243 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-596243 addons disable volumesnapshots --alsologtostderr -v=1: (1.050395134s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-596243 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-596243 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.984071296s)
--- PASS: TestAddons/parallel/CSI (33.31s)

TestAddons/parallel/Headlamp (17.28s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-596243 --alsologtostderr -v=1
addons_test.go:747: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-596243 --alsologtostderr -v=1: (1.433189692s)
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5d4b5d7bd6-5fz49" [62b4c1e6-6792-4e5c-abce-98128f80ad0e] Pending
helpers_test.go:344: "headlamp-5d4b5d7bd6-5fz49" [62b4c1e6-6792-4e5c-abce-98128f80ad0e] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5d4b5d7bd6-5fz49" [62b4c1e6-6792-4e5c-abce-98128f80ad0e] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.004905293s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-596243 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-596243 addons disable headlamp --alsologtostderr -v=1: (5.838652277s)
--- PASS: TestAddons/parallel/Headlamp (17.28s)

TestAddons/parallel/CloudSpanner (5.91s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-7dc7f9b5b8-pmzqf" [cf36c087-da37-4969-ba28-a7d743ec902f] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003969373s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-596243 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.91s)

TestAddons/parallel/LocalPath (52.44s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-596243 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-596243 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-596243 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-596243 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-596243 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-596243 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-596243 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [8154c3cd-c0e3-4a99-b952-f32f7d4d8612] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [8154c3cd-c0e3-4a99-b952-f32f7d4d8612] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [8154c3cd-c0e3-4a99-b952-f32f7d4d8612] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.004920925s
addons_test.go:906: (dbg) Run:  kubectl --context addons-596243 get pvc test-pvc -o=json
addons_test.go:915: (dbg) Run:  out/minikube-linux-arm64 -p addons-596243 ssh "cat /opt/local-path-provisioner/pvc-e8885e88-8c6e-448b-a8bb-0e4a22d7635b_default_test-pvc/file1"
addons_test.go:927: (dbg) Run:  kubectl --context addons-596243 delete pod test-local-path
addons_test.go:931: (dbg) Run:  kubectl --context addons-596243 delete pvc test-pvc
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-596243 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-596243 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.909676785s)
--- PASS: TestAddons/parallel/LocalPath (52.44s)

TestAddons/parallel/NvidiaDevicePlugin (5.58s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-nh9lr" [fb3c65af-8a11-47e6-aaf4-71f5e3acfb83] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004291989s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-596243 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.58s)

TestAddons/parallel/Yakd (11.84s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-575dd5996b-vf4rz" [260fa89b-faab-40e6-af5e-b39f0c9298be] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003852602s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-596243 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-596243 addons disable yakd --alsologtostderr -v=1: (5.831219701s)
--- PASS: TestAddons/parallel/Yakd (11.84s)

TestAddons/StoppedEnableDisable (12.28s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-596243
addons_test.go:170: (dbg) Done: out/minikube-linux-arm64 stop -p addons-596243: (11.99289491s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-596243
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-596243
addons_test.go:183: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-596243
--- PASS: TestAddons/StoppedEnableDisable (12.28s)

TestCertOptions (35.36s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-839524 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-839524 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (32.602576891s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-839524 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-839524 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-839524 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-839524" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-839524
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-839524: (2.058158303s)
--- PASS: TestCertOptions (35.36s)
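
To eyeball the SANs and API server port this test asserts on, a sketch reusing the same openssl invocation plus a grep (hypothetical reuse; the profile is deleted during cleanup above):

  out/minikube-linux-arm64 -p cert-options-839524 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 "Subject Alternative Name"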

TestCertExpiration (226.01s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-618228 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-618228 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (37.087457969s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-618228 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-618228 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (6.540079476s)
helpers_test.go:175: Cleaning up "cert-expiration-618228" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-618228
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-618228: (2.37957463s)
--- PASS: TestCertExpiration (226.01s)
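
A sketch for inspecting the rotated expiry by hand (hypothetical; the profile is deleted at the end of the test), borrowing the apiserver cert path shown in TestCertOptions above:

  out/minikube-linux-arm64 -p cert-expiration-618228 ssh "sudo openssl x509 -enddate -noout -in /var/lib/minikube/certs/apiserver.crt"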

TestForceSystemdFlag (34.86s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-567955 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-567955 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (32.444375155s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-567955 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-567955" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-567955
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-567955: (2.103288115s)
--- PASS: TestForceSystemdFlag (34.86s)
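
The ssh step above dumps the whole containerd config; a narrower sketch of what --force-systemd is expected to change, assuming containerd's usual SystemdCgroup runc option (an assumption, not quoted verbatim by this log):

  out/minikube-linux-arm64 -p force-systemd-flag-567955 ssh "grep SystemdCgroup /etc/containerd/config.toml"
  # expected with --force-systemd: SystemdCgroup = true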

TestForceSystemdEnv (45.69s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-913141 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-913141 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (42.23601814s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-913141 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-913141" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-913141
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-913141: (2.918794567s)
--- PASS: TestForceSystemdEnv (45.69s)

TestDockerEnvContainerd (45.97s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-414825 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-414825 --driver=docker  --container-runtime=containerd: (30.182359472s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-414825"
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-pNUjB8bBdDJT/agent.900174" SSH_AGENT_PID="900175" DOCKER_HOST=ssh://docker@127.0.0.1:33883 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-pNUjB8bBdDJT/agent.900174" SSH_AGENT_PID="900175" DOCKER_HOST=ssh://docker@127.0.0.1:33883 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-pNUjB8bBdDJT/agent.900174" SSH_AGENT_PID="900175" DOCKER_HOST=ssh://docker@127.0.0.1:33883 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.21998343s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-pNUjB8bBdDJT/agent.900174" SSH_AGENT_PID="900175" DOCKER_HOST=ssh://docker@127.0.0.1:33883 docker image ls"
helpers_test.go:175: Cleaning up "dockerenv-414825" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-414825
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-414825: (2.08189343s)
--- PASS: TestDockerEnvContainerd (45.97s)
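
The same docker-env flow in interactive form, a sketch assuming a live profile (the test deletes dockerenv-414825 afterwards); the --ssh-host and --ssh-add flags are the ones exercised above:

  eval "$(out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-414825)"
  docker version    # talks to the Docker endpoint inside the node over ssh
  docker image ls   # images built through this tunnel should be listed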

TestErrorSpam/setup (33.39s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-055024 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-055024 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-055024 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-055024 --driver=docker  --container-runtime=containerd: (33.385220068s)
--- PASS: TestErrorSpam/setup (33.39s)

TestErrorSpam/start (0.78s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-055024 --log_dir /tmp/nospam-055024 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-055024 --log_dir /tmp/nospam-055024 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-055024 --log_dir /tmp/nospam-055024 start --dry-run
--- PASS: TestErrorSpam/start (0.78s)

TestErrorSpam/status (1.12s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-055024 --log_dir /tmp/nospam-055024 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-055024 --log_dir /tmp/nospam-055024 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-055024 --log_dir /tmp/nospam-055024 status
--- PASS: TestErrorSpam/status (1.12s)

TestErrorSpam/pause (1.91s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-055024 --log_dir /tmp/nospam-055024 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-055024 --log_dir /tmp/nospam-055024 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-055024 --log_dir /tmp/nospam-055024 pause
--- PASS: TestErrorSpam/pause (1.91s)

TestErrorSpam/unpause (1.87s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-055024 --log_dir /tmp/nospam-055024 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-055024 --log_dir /tmp/nospam-055024 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-055024 --log_dir /tmp/nospam-055024 unpause
--- PASS: TestErrorSpam/unpause (1.87s)

TestErrorSpam/stop (1.49s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-055024 --log_dir /tmp/nospam-055024 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-055024 --log_dir /tmp/nospam-055024 stop: (1.289842242s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-055024 --log_dir /tmp/nospam-055024 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-055024 --log_dir /tmp/nospam-055024 stop
--- PASS: TestErrorSpam/stop (1.49s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1872: local sync path: /home/jenkins/minikube-integration/20602-873072/.minikube/files/etc/test/nested/copy/878594/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (62.15s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2251: (dbg) Run:  out/minikube-linux-arm64 start -p functional-062962 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
E0407 12:45:07.906497  878594 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/addons-596243/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:45:07.913250  878594 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/addons-596243/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:45:07.924566  878594 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/addons-596243/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:45:07.945848  878594 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/addons-596243/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:45:07.987214  878594 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/addons-596243/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:45:08.068616  878594 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/addons-596243/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:45:08.230065  878594 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/addons-596243/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:45:08.551763  878594 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/addons-596243/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:45:09.193268  878594 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/addons-596243/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:45:10.474748  878594 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/addons-596243/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:45:13.036615  878594 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/addons-596243/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:45:18.158961  878594 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/addons-596243/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:45:28.401131  878594 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/addons-596243/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2251: (dbg) Done: out/minikube-linux-arm64 start -p functional-062962 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (1m2.14821552s)
--- PASS: TestFunctional/serial/StartWithProxy (62.15s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (6.2s)

=== RUN   TestFunctional/serial/SoftStart
I0407 12:45:33.929802  878594 config.go:182] Loaded profile config "functional-062962": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
functional_test.go:676: (dbg) Run:  out/minikube-linux-arm64 start -p functional-062962 --alsologtostderr -v=8
functional_test.go:676: (dbg) Done: out/minikube-linux-arm64 start -p functional-062962 --alsologtostderr -v=8: (6.192950851s)
functional_test.go:680: soft start took 6.195160486s for "functional-062962" cluster.
I0407 12:45:40.123121  878594 config.go:182] Loaded profile config "functional-062962": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
--- PASS: TestFunctional/serial/SoftStart (6.20s)

TestFunctional/serial/KubeContext (0.08s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:698: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.08s)

TestFunctional/serial/KubectlGetPods (0.1s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:713: (dbg) Run:  kubectl --context functional-062962 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

TestFunctional/serial/CacheCmd/cache/add_remote (4.25s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1066: (dbg) Run:  out/minikube-linux-arm64 -p functional-062962 cache add registry.k8s.io/pause:3.1
functional_test.go:1066: (dbg) Done: out/minikube-linux-arm64 -p functional-062962 cache add registry.k8s.io/pause:3.1: (1.612036481s)
functional_test.go:1066: (dbg) Run:  out/minikube-linux-arm64 -p functional-062962 cache add registry.k8s.io/pause:3.3
functional_test.go:1066: (dbg) Done: out/minikube-linux-arm64 -p functional-062962 cache add registry.k8s.io/pause:3.3: (1.38905191s)
functional_test.go:1066: (dbg) Run:  out/minikube-linux-arm64 -p functional-062962 cache add registry.k8s.io/pause:latest
functional_test.go:1066: (dbg) Done: out/minikube-linux-arm64 -p functional-062962 cache add registry.k8s.io/pause:latest: (1.244327526s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.25s)

TestFunctional/serial/CacheCmd/cache/add_local (1.32s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1094: (dbg) Run:  docker build -t minikube-local-cache-test:functional-062962 /tmp/TestFunctionalserialCacheCmdcacheadd_local3399134160/001
functional_test.go:1106: (dbg) Run:  out/minikube-linux-arm64 -p functional-062962 cache add minikube-local-cache-test:functional-062962
functional_test.go:1111: (dbg) Run:  out/minikube-linux-arm64 -p functional-062962 cache delete minikube-local-cache-test:functional-062962
functional_test.go:1100: (dbg) Run:  docker rmi minikube-local-cache-test:functional-062962
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.32s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1119: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1127: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1141: (dbg) Run:  out/minikube-linux-arm64 -p functional-062962 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1164: (dbg) Run:  out/minikube-linux-arm64 -p functional-062962 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1170: (dbg) Run:  out/minikube-linux-arm64 -p functional-062962 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1170: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-062962 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (302.152978ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1175: (dbg) Run:  out/minikube-linux-arm64 -p functional-062962 cache reload
functional_test.go:1175: (dbg) Done: out/minikube-linux-arm64 -p functional-062962 cache reload: (1.089765884s)
functional_test.go:1180: (dbg) Run:  out/minikube-linux-arm64 -p functional-062962 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.06s)
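
A minimal sketch of the reload cycle verified above: delete a cached image inside the node, confirm it is gone, then have minikube push its local cache back in (commands taken from this run's log):

	out/minikube-linux-arm64 -p functional-062962 ssh sudo crictl rmi registry.k8s.io/pause:latest
	out/minikube-linux-arm64 -p functional-062962 ssh sudo crictl inspecti registry.k8s.io/pause:latest  # exit 1: image gone
	out/minikube-linux-arm64 -p functional-062962 cache reload
	out/minikube-linux-arm64 -p functional-062962 ssh sudo crictl inspecti registry.k8s.io/pause:latest  # exit 0: image restored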

TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1189: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1189: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.15s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:733: (dbg) Run:  out/minikube-linux-arm64 -p functional-062962 kubectl -- --context functional-062962 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.15s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:758: (dbg) Run:  out/kubectl --context functional-062962 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

TestFunctional/serial/ExtraConfig (44.73s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:774: (dbg) Run:  out/minikube-linux-arm64 start -p functional-062962 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0407 12:45:48.882972  878594 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/addons-596243/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:46:29.844930  878594 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/addons-596243/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:774: (dbg) Done: out/minikube-linux-arm64 start -p functional-062962 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (44.730396054s)
functional_test.go:778: restart took 44.730505578s for "functional-062962" cluster.
I0407 12:46:33.485190  878594 config.go:182] Loaded profile config "functional-062962": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
--- PASS: TestFunctional/serial/ExtraConfig (44.73s)

TestFunctional/serial/ComponentHealth (0.11s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:827: (dbg) Run:  kubectl --context functional-062962 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:842: etcd phase: Running
functional_test.go:852: etcd status: Ready
functional_test.go:842: kube-apiserver phase: Running
functional_test.go:852: kube-apiserver status: Ready
functional_test.go:842: kube-controller-manager phase: Running
functional_test.go:852: kube-controller-manager status: Ready
functional_test.go:842: kube-scheduler phase: Running
functional_test.go:852: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.11s)

TestFunctional/serial/LogsCmd (1.79s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1253: (dbg) Run:  out/minikube-linux-arm64 -p functional-062962 logs
functional_test.go:1253: (dbg) Done: out/minikube-linux-arm64 -p functional-062962 logs: (1.785542691s)
--- PASS: TestFunctional/serial/LogsCmd (1.79s)

TestFunctional/serial/LogsFileCmd (1.8s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1267: (dbg) Run:  out/minikube-linux-arm64 -p functional-062962 logs --file /tmp/TestFunctionalserialLogsFileCmd1081861695/001/logs.txt
functional_test.go:1267: (dbg) Done: out/minikube-linux-arm64 -p functional-062962 logs --file /tmp/TestFunctionalserialLogsFileCmd1081861695/001/logs.txt: (1.79549138s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.80s)

TestFunctional/serial/InvalidService (4.27s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2338: (dbg) Run:  kubectl --context functional-062962 apply -f testdata/invalidsvc.yaml
functional_test.go:2352: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-062962
functional_test.go:2352: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-062962: exit status 115 (406.910171ms)
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:32200 |
	|-----------|-------------|-------------|---------------------------|
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2344: (dbg) Run:  kubectl --context functional-062962 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.27s)
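
The non-zero exit is the point of this test: `minikube service` resolves a NodePort URL for the service but exits 115 (SVC_UNREACHABLE) because no running pod backs it. A sketch of the same check:

	kubectl --context functional-062962 apply -f testdata/invalidsvc.yaml
	out/minikube-linux-arm64 service invalid-svc -p functional-062962 || echo "exit $?"   # prints "exit 115"
	kubectl --context functional-062962 delete -f testdata/invalidsvc.yaml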

TestFunctional/parallel/ConfigCmd (0.46s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1216: (dbg) Run:  out/minikube-linux-arm64 -p functional-062962 config unset cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-arm64 -p functional-062962 config get cpus
functional_test.go:1216: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-062962 config get cpus: exit status 14 (66.081886ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1216: (dbg) Run:  out/minikube-linux-arm64 -p functional-062962 config set cpus 2
functional_test.go:1216: (dbg) Run:  out/minikube-linux-arm64 -p functional-062962 config get cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-arm64 -p functional-062962 config unset cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-arm64 -p functional-062962 config get cpus
functional_test.go:1216: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-062962 config get cpus: exit status 14 (71.039109ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.46s)
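
Exit status 14 above is minikube's "key not found" code for `config get` on an unset key; the sketch below walks the same set/get/unset cycle with this run's profile:

	out/minikube-linux-arm64 -p functional-062962 config get cpus     # exit 14: key not in config
	out/minikube-linux-arm64 -p functional-062962 config set cpus 2
	out/minikube-linux-arm64 -p functional-062962 config get cpus     # prints 2
	out/minikube-linux-arm64 -p functional-062962 config unset cpus
	out/minikube-linux-arm64 -p functional-062962 config get cpus     # exit 14 again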

TestFunctional/parallel/DashboardCmd (14.02s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:922: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-062962 --alsologtostderr -v=1]
functional_test.go:927: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-062962 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 915448: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (14.02s)

TestFunctional/parallel/DryRun (0.47s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:991: (dbg) Run:  out/minikube-linux-arm64 start -p functional-062962 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:991: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-062962 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (196.246095ms)
-- stdout --
	* [functional-062962] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20602
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20602-873072/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20602-873072/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0407 12:47:15.722365  915163 out.go:345] Setting OutFile to fd 1 ...
	I0407 12:47:15.722511  915163 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 12:47:15.722523  915163 out.go:358] Setting ErrFile to fd 2...
	I0407 12:47:15.722528  915163 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 12:47:15.722792  915163 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20602-873072/.minikube/bin
	I0407 12:47:15.723189  915163 out.go:352] Setting JSON to false
	I0407 12:47:15.724305  915163 start.go:129] hostinfo: {"hostname":"ip-172-31-24-2","uptime":16180,"bootTime":1744013856,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1081-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I0407 12:47:15.724386  915163 start.go:139] virtualization:  
	I0407 12:47:15.727638  915163 out.go:177] * [functional-062962] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	I0407 12:47:15.730697  915163 out.go:177]   - MINIKUBE_LOCATION=20602
	I0407 12:47:15.730733  915163 notify.go:220] Checking for updates...
	I0407 12:47:15.736513  915163 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0407 12:47:15.739441  915163 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20602-873072/kubeconfig
	I0407 12:47:15.742249  915163 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20602-873072/.minikube
	I0407 12:47:15.745169  915163 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0407 12:47:15.748071  915163 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0407 12:47:15.751573  915163 config.go:182] Loaded profile config "functional-062962": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0407 12:47:15.752178  915163 driver.go:394] Setting default libvirt URI to qemu:///system
	I0407 12:47:15.777532  915163 docker.go:123] docker version: linux-28.0.4:Docker Engine - Community
	I0407 12:47:15.777660  915163 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0407 12:47:15.841396  915163 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-04-07 12:47:15.831208725 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1081-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> Serv
erErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0]] Warnings:<nil>}}
	I0407 12:47:15.841509  915163 docker.go:318] overlay module found
	I0407 12:47:15.844582  915163 out.go:177] * Using the docker driver based on existing profile
	I0407 12:47:15.847254  915163 start.go:297] selected driver: docker
	I0407 12:47:15.847277  915163 start.go:901] validating driver "docker" against &{Name:functional-062962 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:functional-062962 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:do
cker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 12:47:15.847399  915163 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0407 12:47:15.850928  915163 out.go:201] 
	W0407 12:47:15.853939  915163 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0407 12:47:15.856802  915163 out.go:201] 
** /stderr **
functional_test.go:1008: (dbg) Run:  out/minikube-linux-arm64 start -p functional-062962 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.47s)

TestFunctional/parallel/InternationalLanguage (0.27s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1037: (dbg) Run:  out/minikube-linux-arm64 start -p functional-062962 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-062962 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (267.499526ms)
-- stdout --
	* [functional-062962] minikube v1.35.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20602
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20602-873072/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20602-873072/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0407 12:47:15.473834  915050 out.go:345] Setting OutFile to fd 1 ...
	I0407 12:47:15.473948  915050 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 12:47:15.473953  915050 out.go:358] Setting ErrFile to fd 2...
	I0407 12:47:15.473958  915050 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 12:47:15.474432  915050 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20602-873072/.minikube/bin
	I0407 12:47:15.475027  915050 out.go:352] Setting JSON to false
	I0407 12:47:15.478670  915050 start.go:129] hostinfo: {"hostname":"ip-172-31-24-2","uptime":16180,"bootTime":1744013856,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1081-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I0407 12:47:15.478749  915050 start.go:139] virtualization:  
	I0407 12:47:15.483598  915050 out.go:177] * [functional-062962] minikube v1.35.0 sur Ubuntu 20.04 (arm64)
	I0407 12:47:15.490071  915050 out.go:177]   - MINIKUBE_LOCATION=20602
	I0407 12:47:15.490190  915050 notify.go:220] Checking for updates...
	I0407 12:47:15.496066  915050 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0407 12:47:15.499145  915050 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20602-873072/kubeconfig
	I0407 12:47:15.501952  915050 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20602-873072/.minikube
	I0407 12:47:15.504781  915050 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0407 12:47:15.507939  915050 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0407 12:47:15.511197  915050 config.go:182] Loaded profile config "functional-062962": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0407 12:47:15.511826  915050 driver.go:394] Setting default libvirt URI to qemu:///system
	I0407 12:47:15.563292  915050 docker.go:123] docker version: linux-28.0.4:Docker Engine - Community
	I0407 12:47:15.563426  915050 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0407 12:47:15.640535  915050 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-04-07 12:47:15.627152719 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1081-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> Serv
erErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0]] Warnings:<nil>}}
	I0407 12:47:15.640656  915050 docker.go:318] overlay module found
	I0407 12:47:15.644273  915050 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0407 12:47:15.647330  915050 start.go:297] selected driver: docker
	I0407 12:47:15.647376  915050 start.go:901] validating driver "docker" against &{Name:functional-062962 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:functional-062962 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:do
cker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 12:47:15.647488  915050 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0407 12:47:15.651132  915050 out.go:201] 
	W0407 12:47:15.653988  915050 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0407 12:47:15.659120  915050 out.go:201] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.27s)
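
The French output above comes from minikube's message catalog, selected via the locale environment; the exact variables the test harness sets are not shown in this log, so the manual reproduction below is hypothetical (the LC_ALL value is an assumption):

	LC_ALL=fr_FR.UTF-8 out/minikube-linux-arm64 start -p functional-062962 --dry-run --memory 250MB \
	  --alsologtostderr --driver=docker --container-runtime=containerd   # exits 23 with French messages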

TestFunctional/parallel/StatusCmd (1.32s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:871: (dbg) Run:  out/minikube-linux-arm64 -p functional-062962 status
functional_test.go:877: (dbg) Run:  out/minikube-linux-arm64 -p functional-062962 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:889: (dbg) Run:  out/minikube-linux-arm64 -p functional-062962 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.32s)
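
Note that in the `-f` format string only the `{{.Host}}`-style fields are evaluated; the labels before each field are literal text, which is why the misspelled `kublet:` label in the command above is harmless. A sketch of the three forms exercised:

	out/minikube-linux-arm64 -p functional-062962 status
	out/minikube-linux-arm64 -p functional-062962 status -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}}'
	out/minikube-linux-arm64 -p functional-062962 status -o json   # same data, machine-readable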

TestFunctional/parallel/ServiceCmdConnect (9.73s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1644: (dbg) Run:  kubectl --context functional-062962 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1652: (dbg) Run:  kubectl --context functional-062962 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1657: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-8449669db6-dc52v" [19ae438a-63c2-46cd-8560-bf153a8ecd3d] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-8449669db6-dc52v" [19ae438a-63c2-46cd-8560-bf153a8ecd3d] Running
functional_test.go:1657: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 9.003997721s
functional_test.go:1666: (dbg) Run:  out/minikube-linux-arm64 -p functional-062962 service hello-node-connect --url
functional_test.go:1672: found endpoint for hello-node-connect: http://192.168.49.2:30648
functional_test.go:1692: http://192.168.49.2:30648: success! body:
Hostname: hello-node-connect-8449669db6-dc52v

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:30648
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (9.73s)
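
A condensed sketch of the flow above: expose a deployment as a NodePort service, let minikube resolve the node URL, and fetch it (curl stands in for the test's HTTP client):

	kubectl --context functional-062962 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
	kubectl --context functional-062962 expose deployment hello-node-connect --type=NodePort --port=8080
	url="$(out/minikube-linux-arm64 -p functional-062962 service hello-node-connect --url)"
	curl -s "$url"   # echoserver reports hostname, request headers, etc.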

TestFunctional/parallel/AddonsCmd (0.24s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-062962 addons list
functional_test.go:1719: (dbg) Run:  out/minikube-linux-arm64 -p functional-062962 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.24s)

TestFunctional/parallel/PersistentVolumeClaim (27.24s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [bc6cdc6a-0f2b-430c-a99c-ac3010ebe86e] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.002553413s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-062962 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-062962 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-062962 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-062962 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [515c2f8f-be8e-4de2-a0e2-25a0b9ee1754] Pending
helpers_test.go:344: "sp-pod" [515c2f8f-be8e-4de2-a0e2-25a0b9ee1754] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [515c2f8f-be8e-4de2-a0e2-25a0b9ee1754] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.004028662s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-062962 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-062962 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-062962 delete -f testdata/storage-provisioner/pod.yaml: (1.19071724s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-062962 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [7c4c6fe5-70d7-4a04-9063-92915abffac3] Pending
helpers_test.go:344: "sp-pod" [7c4c6fe5-70d7-4a04-9063-92915abffac3] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [7c4c6fe5-70d7-4a04-9063-92915abffac3] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.003843818s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-062962 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (27.24s)
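
The point of this test is that data written to the claim (`touch /tmp/mount/foo`) survives deleting and recreating the pod. The manifest below is a guess at the shape of testdata/storage-provisioner/pvc.yaml; only the claim name `myclaim` is taken from the log, and the size and access mode are assumptions:

	kubectl --context functional-062962 apply -f - <<-'EOF'
	apiVersion: v1
	kind: PersistentVolumeClaim
	metadata:
	  name: myclaim
	spec:
	  accessModes: ["ReadWriteOnce"]
	  resources:
	    requests:
	      storage: 500Mi
	EOF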

TestFunctional/parallel/SSHCmd (0.79s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1742: (dbg) Run:  out/minikube-linux-arm64 -p functional-062962 ssh "echo hello"
functional_test.go:1759: (dbg) Run:  out/minikube-linux-arm64 -p functional-062962 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.79s)

TestFunctional/parallel/CpCmd (2.49s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-062962 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-062962 ssh -n functional-062962 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-062962 cp functional-062962:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3054047221/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-062962 ssh -n functional-062962 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-062962 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-062962 ssh -n functional-062962 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.49s)

TestFunctional/parallel/FileSync (0.32s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1946: Checking for existence of /etc/test/nested/copy/878594/hosts within VM
functional_test.go:1948: (dbg) Run:  out/minikube-linux-arm64 -p functional-062962 ssh "sudo cat /etc/test/nested/copy/878594/hosts"
functional_test.go:1953: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.32s)
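
File sync mirrors everything under $MINIKUBE_HOME/files/ into the node at the same relative path, which is how the local sync path from CopySyncFile surfaces as /etc/test/nested/copy/878594/hosts inside the VM. A sketch using this run's paths:

	# Host side: file staged under .minikube/files/<path>.
	cat /home/jenkins/minikube-integration/20602-873072/.minikube/files/etc/test/nested/copy/878594/hosts
	# Node side: same content at /<path> after start.
	out/minikube-linux-arm64 -p functional-062962 ssh "sudo cat /etc/test/nested/copy/878594/hosts"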

TestFunctional/parallel/CertSync (2.18s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1989: Checking for existence of /etc/ssl/certs/878594.pem within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-arm64 -p functional-062962 ssh "sudo cat /etc/ssl/certs/878594.pem"
functional_test.go:1989: Checking for existence of /usr/share/ca-certificates/878594.pem within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-arm64 -p functional-062962 ssh "sudo cat /usr/share/ca-certificates/878594.pem"
functional_test.go:1989: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-arm64 -p functional-062962 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2016: Checking for existence of /etc/ssl/certs/8785942.pem within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-arm64 -p functional-062962 ssh "sudo cat /etc/ssl/certs/8785942.pem"
functional_test.go:2016: Checking for existence of /usr/share/ca-certificates/8785942.pem within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-arm64 -p functional-062962 ssh "sudo cat /usr/share/ca-certificates/8785942.pem"
functional_test.go:2016: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-arm64 -p functional-062962 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.18s)
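
Note: every assertion above follows one pattern: cat the synced certificate inside the node and compare it with the host copy. Sketch built from this run's commands (reading the .0 names as OpenSSL subject-hash links is an assumption):

  # a synced cert must be visible at each expected location inside the node
  out/minikube-linux-arm64 -p functional-062962 ssh "sudo cat /etc/ssl/certs/878594.pem"
  out/minikube-linux-arm64 -p functional-062962 ssh "sudo cat /usr/share/ca-certificates/878594.pem"
  out/minikube-linux-arm64 -p functional-062962 ssh "sudo cat /etc/ssl/certs/51391683.0"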

TestFunctional/parallel/NodeLabels (0.12s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:236: (dbg) Run:  kubectl --context functional-062962 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.12s)
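
Note: the label check is plain kubectl; the go-template walks the first node's label map. The jsonpath form below is an equivalent alternative, not taken from this run:

  # go-template form, as used by the test
  kubectl --context functional-062962 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
  # jsonpath alternative (assumption)
  kubectl --context functional-062962 get nodes -o jsonpath='{.items[0].metadata.labels}'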

TestFunctional/parallel/NonActiveRuntimeDisabled (0.54s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2044: (dbg) Run:  out/minikube-linux-arm64 -p functional-062962 ssh "sudo systemctl is-active docker"
functional_test.go:2044: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-062962 ssh "sudo systemctl is-active docker": exit status 1 (273.108524ms)

-- stdout --
	inactive

-- /stdout --
** stderr **
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2044: (dbg) Run:  out/minikube-linux-arm64 -p functional-062962 ssh "sudo systemctl is-active crio"
functional_test.go:2044: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-062962 ssh "sudo systemctl is-active crio": exit status 1 (266.884856ms)

-- stdout --
	inactive

-- /stdout --
** stderr **
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.54s)
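
Note: the exit status 1 reported by minikube wraps systemctl's own convention: "systemctl is-active" prints the unit state and exits non-zero for anything but active, and the "ssh: Process exited with status 3" lines are that code surfacing through the ssh layer. Manual check for this containerd cluster (the containerd line is an assumption, not exercised above):

  out/minikube-linux-arm64 -p functional-062962 ssh "sudo systemctl is-active docker"      # prints "inactive", exits 3
  out/minikube-linux-arm64 -p functional-062962 ssh "sudo systemctl is-active crio"        # prints "inactive", exits 3
  out/minikube-linux-arm64 -p functional-062962 ssh "sudo systemctl is-active containerd"  # expected "active"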

TestFunctional/parallel/License (0.26s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2305: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.26s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.75s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-062962 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-062962 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-062962 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 912673: os: process already finished
helpers_test.go:508: unable to kill pid 912483: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-062962 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.75s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-062962 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.47s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-062962 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [aefa6fa2-4737-439b-b464-e674e976712a] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [aefa6fa2-4737-439b-b464-e674e976712a] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.003095626s
I0407 12:46:52.921860  878594 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.47s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-062962 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.109.177.254 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
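
Note: taken together, the serial tunnel steps are: run the tunnel, wait for the LoadBalancer service to be assigned an ingress IP, then dial it. Sketch assembled from this run's commands; the curl step is an assumption standing in for the test's Go HTTP client:

  # run the tunnel in the background
  out/minikube-linux-arm64 -p functional-062962 tunnel --alsologtostderr &
  # read the assigned LoadBalancer ingress IP
  kubectl --context functional-062962 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
  # the nginx service should answer on that IP (10.109.177.254 in this run)
  curl http://10.109.177.254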

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.1s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-062962 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.10s)

TestFunctional/parallel/ServiceCmd/DeployApp (8.24s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1454: (dbg) Run:  kubectl --context functional-062962 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1462: (dbg) Run:  kubectl --context functional-062962 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1467: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64fc58db8c-nggm5" [645c51f6-278f-43de-bee7-ead92eafeb09] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64fc58db8c-nggm5" [645c51f6-278f-43de-bee7-ead92eafeb09] Running
functional_test.go:1467: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.003801012s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (8.24s)
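
Note: the deploy step is ordinary kubectl against the minikube context, exactly as recorded above:

  kubectl --context functional-062962 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
  kubectl --context functional-062962 expose deployment hello-node --type=NodePort --port=8080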

TestFunctional/parallel/ProfileCmd/profile_not_create (0.48s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1287: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1292: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.48s)

TestFunctional/parallel/ServiceCmd/List (0.62s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1476: (dbg) Run:  out/minikube-linux-arm64 -p functional-062962 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.62s)

TestFunctional/parallel/ProfileCmd/profile_list (0.54s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1327: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1332: Took "445.366445ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1341: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1346: Took "91.887085ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.54s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.64s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1506: (dbg) Run:  out/minikube-linux-arm64 -p functional-062962 service list -o json
functional_test.go:1511: Took "638.286311ms" to run "out/minikube-linux-arm64 -p functional-062962 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.64s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.58s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1378: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1383: Took "480.698063ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1391: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1396: Took "103.543558ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.58s)
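
Note: --light explains the timing gap above (~104ms vs ~481ms); it skips probing each cluster's status. Sketch; the jq line and the valid[].Name field names are assumptions about the JSON shape, which this log does not show:

  out/minikube-linux-arm64 profile list -o json
  out/minikube-linux-arm64 profile list -o json --light
  # e.g. pull out profile names (assumption: jq installed, .valid[].Name present)
  out/minikube-linux-arm64 profile list -o json --light | jq -r '.valid[].Name'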

TestFunctional/parallel/ServiceCmd/HTTPS (0.59s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1526: (dbg) Run:  out/minikube-linux-arm64 -p functional-062962 service --namespace=default --https --url hello-node
functional_test.go:1539: found endpoint: https://192.168.49.2:32060
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.59s)

TestFunctional/parallel/MountCmd/any-port (8.54s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-062962 /tmp/TestFunctionalparallelMountCmdany-port1126749045/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1744030032778229807" to /tmp/TestFunctionalparallelMountCmdany-port1126749045/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1744030032778229807" to /tmp/TestFunctionalparallelMountCmdany-port1126749045/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1744030032778229807" to /tmp/TestFunctionalparallelMountCmdany-port1126749045/001/test-1744030032778229807
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-062962 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-062962 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (447.30365ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
I0407 12:47:13.226841  878594 retry.go:31] will retry after 575.342928ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-062962 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-062962 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Apr  7 12:47 created-by-test
-rw-r--r-- 1 docker docker 24 Apr  7 12:47 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Apr  7 12:47 test-1744030032778229807
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-062962 ssh cat /mount-9p/test-1744030032778229807
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-062962 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [dbf88b50-95e7-4f54-8b77-b9685bc819de] Pending
helpers_test.go:344: "busybox-mount" [dbf88b50-95e7-4f54-8b77-b9685bc819de] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [dbf88b50-95e7-4f54-8b77-b9685bc819de] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [dbf88b50-95e7-4f54-8b77-b9685bc819de] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.004464938s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-062962 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-062962 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-062962 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-062962 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-062962 /tmp/TestFunctionalparallelMountCmdany-port1126749045/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.54s)
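
Note: the single failed findmnt probe followed by a retry is expected; the 9p server needs a moment to come up. The flow, reduced to its parts (the host directory here is a stand-in for the test's per-run temp dir):

  # serve a host directory into the node over 9p, then verify from inside
  out/minikube-linux-arm64 mount -p functional-062962 /tmp/mount-src:/mount-9p --alsologtostderr -v=1 &
  out/minikube-linux-arm64 -p functional-062962 ssh "findmnt -T /mount-9p | grep 9p"
  out/minikube-linux-arm64 -p functional-062962 ssh -- ls -la /mount-9p
  # a fixed server port can be requested, as the specific-port subtest below does with --port 46464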

TestFunctional/parallel/ServiceCmd/Format (0.42s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1557: (dbg) Run:  out/minikube-linux-arm64 -p functional-062962 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.42s)

TestFunctional/parallel/ServiceCmd/URL (0.47s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1576: (dbg) Run:  out/minikube-linux-arm64 -p functional-062962 service hello-node --url
functional_test.go:1582: found endpoint for hello-node: http://192.168.49.2:32060
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.47s)
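
Note: the ServiceCmd subtests are different views over the same NodePort service; every URL form resolves to the node IP plus the assigned port (192.168.49.2:32060 in this run):

  out/minikube-linux-arm64 -p functional-062962 service list
  out/minikube-linux-arm64 -p functional-062962 service hello-node --url
  out/minikube-linux-arm64 -p functional-062962 service --namespace=default --https --url hello-node
  out/minikube-linux-arm64 -p functional-062962 service hello-node --url --format={{.IP}}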

TestFunctional/parallel/MountCmd/specific-port (2.42s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-062962 /tmp/TestFunctionalparallelMountCmdspecific-port3127393176/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-062962 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-062962 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (379.200431ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
I0407 12:47:21.701991  878594 retry.go:31] will retry after 740.083002ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-062962 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-062962 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-062962 /tmp/TestFunctionalparallelMountCmdspecific-port3127393176/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-062962 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-062962 ssh "sudo umount -f /mount-9p": exit status 1 (345.996971ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr **
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-062962 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-062962 /tmp/TestFunctionalparallelMountCmdspecific-port3127393176/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.42s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.4s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-062962 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3467562722/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-062962 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3467562722/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-062962 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3467562722/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-062962 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-062962 ssh "findmnt -T" /mount1: exit status 1 (969.519129ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
I0407 12:47:24.715657  878594 retry.go:31] will retry after 347.962058ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-062962 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-062962 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-062962 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-062962 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-062962 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3467562722/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-062962 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3467562722/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-062962 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3467562722/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.40s)
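
Note: cleanup does not require tracking the three mount daemons individually; the --kill flag used above tears down every mount belonging to the profile in one call:

  out/minikube-linux-arm64 mount -p functional-062962 --kill=true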

TestFunctional/parallel/Version/short (0.08s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2273: (dbg) Run:  out/minikube-linux-arm64 -p functional-062962 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

TestFunctional/parallel/Version/components (1.41s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2287: (dbg) Run:  out/minikube-linux-arm64 -p functional-062962 version -o=json --components
functional_test.go:2287: (dbg) Done: out/minikube-linux-arm64 -p functional-062962 version -o=json --components: (1.414242266s)
--- PASS: TestFunctional/parallel/Version/components (1.41s)
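
Note: both version forms used by these tests; the JSON components listing is the one that takes ~1.4s, presumably because it interrogates the running node's components rather than just the binary:

  out/minikube-linux-arm64 -p functional-062962 version --short
  out/minikube-linux-arm64 -p functional-062962 version -o=json --components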

TestFunctional/parallel/ImageCommands/ImageListShort (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:278: (dbg) Run:  out/minikube-linux-arm64 -p functional-062962 image ls --format short --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-arm64 -p functional-062962 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.32.2
registry.k8s.io/kube-proxy:v1.32.2
registry.k8s.io/kube-controller-manager:v1.32.2
registry.k8s.io/kube-apiserver:v1.32.2
registry.k8s.io/etcd:3.5.16-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-062962
docker.io/kindest/kindnetd:v20250214-acbabc1a
docker.io/kindest/kindnetd:v20241212-9f82dd49
docker.io/kicbase/echo-server:functional-062962
functional_test.go:286: (dbg) Stderr: out/minikube-linux-arm64 -p functional-062962 image ls --format short --alsologtostderr:
I0407 12:47:34.582217  918057 out.go:345] Setting OutFile to fd 1 ...
I0407 12:47:34.582471  918057 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0407 12:47:34.582500  918057 out.go:358] Setting ErrFile to fd 2...
I0407 12:47:34.582520  918057 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0407 12:47:34.582802  918057 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20602-873072/.minikube/bin
I0407 12:47:34.583569  918057 config.go:182] Loaded profile config "functional-062962": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
I0407 12:47:34.583756  918057 config.go:182] Loaded profile config "functional-062962": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
I0407 12:47:34.584245  918057 cli_runner.go:164] Run: docker container inspect functional-062962 --format={{.State.Status}}
I0407 12:47:34.618707  918057 ssh_runner.go:195] Run: systemctl --version
I0407 12:47:34.618769  918057 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-062962
I0407 12:47:34.641093  918057 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33893 SSHKeyPath:/home/jenkins/minikube-integration/20602-873072/.minikube/machines/functional-062962/id_rsa Username:docker}
I0407 12:47:34.746169  918057 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.30s)
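
Note: image ls accepts the four formats exercised in this group; short prints bare references (above), while the table, json and yaml variants that follow also carry image IDs and sizes:

  out/minikube-linux-arm64 -p functional-062962 image ls --format short
  out/minikube-linux-arm64 -p functional-062962 image ls --format table
  out/minikube-linux-arm64 -p functional-062962 image ls --format json
  out/minikube-linux-arm64 -p functional-062962 image ls --format yaml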

TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:278: (dbg) Run:  out/minikube-linux-arm64 -p functional-062962 image ls --format table --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-arm64 -p functional-062962 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/pause                       | 3.10               | sha256:afb617 | 268kB  |
| registry.k8s.io/kube-apiserver              | v1.32.2            | sha256:6417e1 | 26.2MB |
| registry.k8s.io/etcd                        | 3.5.16-0           | sha256:7fc9d4 | 67.9MB |
| registry.k8s.io/kube-proxy                  | v1.32.2            | sha256:e5aac5 | 27.4MB |
| docker.io/kindest/kindnetd                  | v20241212-9f82dd49 | sha256:e1181e | 35.7MB |
| docker.io/library/nginx                     | latest             | sha256:2c9168 | 68.6MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:ba04bb | 8.03MB |
| registry.k8s.io/coredns/coredns             | v1.11.3            | sha256:2f6c96 | 16.9MB |
| registry.k8s.io/pause                       | latest             | sha256:8cb209 | 71.3kB |
| docker.io/library/minikube-local-cache-test | functional-062962  | sha256:2fdaad | 990B   |
| docker.io/kicbase/echo-server               | functional-062962  | sha256:ce2d2c | 2.17MB |
| docker.io/kindest/kindnetd                  | v20250214-acbabc1a | sha256:ee75e2 | 35.7MB |
| docker.io/library/nginx                     | alpine             | sha256:cedb66 | 21.7MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:1611cd | 1.94MB |
| registry.k8s.io/echoserver-arm              | 1.8                | sha256:72565b | 45.3MB |
| registry.k8s.io/pause                       | 3.1                | sha256:8057e0 | 262kB  |
| registry.k8s.io/pause                       | 3.3                | sha256:3d1873 | 249kB  |
| registry.k8s.io/kube-controller-manager     | v1.32.2            | sha256:3c9285 | 24MB   |
| registry.k8s.io/kube-scheduler              | v1.32.2            | sha256:82dfa0 | 18.9MB |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:286: (dbg) Stderr: out/minikube-linux-arm64 -p functional-062962 image ls --format table --alsologtostderr:
I0407 12:47:35.294445  918246 out.go:345] Setting OutFile to fd 1 ...
I0407 12:47:35.294700  918246 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0407 12:47:35.294711  918246 out.go:358] Setting ErrFile to fd 2...
I0407 12:47:35.294717  918246 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0407 12:47:35.295032  918246 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20602-873072/.minikube/bin
I0407 12:47:35.295729  918246 config.go:182] Loaded profile config "functional-062962": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
I0407 12:47:35.295910  918246 config.go:182] Loaded profile config "functional-062962": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
I0407 12:47:35.296404  918246 cli_runner.go:164] Run: docker container inspect functional-062962 --format={{.State.Status}}
I0407 12:47:35.323577  918246 ssh_runner.go:195] Run: systemctl --version
I0407 12:47:35.323632  918246 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-062962
I0407 12:47:35.350625  918246 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33893 SSHKeyPath:/home/jenkins/minikube-integration/20602-873072/.minikube/machines/functional-062962/id_rsa Username:docker}
I0407 12:47:35.438699  918246 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:278: (dbg) Run:  out/minikube-linux-arm64 -p functional-062962 image ls --format json --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-arm64 -p functional-062962 image ls --format json --alsologtostderr:
[{"id":"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":["registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"267933"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"},{"id":"sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"18306114"},{"id":"sha256:2c9168b3c9a84851f91e03534dc4136951e9f581ab3ac8ee38b28b49ad57ba38","repoDigests":["docker.io/library/nginx@sha256:124b44bfc9ccd1f3cedf4b592d4d1e8bddb78b51ec2ed5056c52d3692baebc19"],"repoTags":["doc
ker.io/library/nginx:latest"],"size":"68634448"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:2fdaad110ad6e5422c47004d0e05967879dc3953ab83f135b9ba941b59020349","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-062962"],"size":"990"},{"id":"sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"45324675"},{"id":"sha256:e5aac5df76d9b8dc899ab8c4db25a7648e7fb25cafe7a155066247883c78f062","repoDigests":["registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d"],"repoTags":["registry.k8s.io/kube-proxy:v1.32.2"],"size":"27362401"},{"id":"sha256:6417e1437b6d9a789e1ca789695a574e1df00a632bdbfbcae9695c9a7d500e32","repoDigests":["
registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f"],"repoTags":["registry.k8s.io/kube-apiserver:v1.32.2"],"size":"26215036"},{"id":"sha256:e1181ee320546c66f17956a302db1b7899d88a593f116726718851133de588b6","repoDigests":["docker.io/kindest/kindnetd@sha256:56ea59f77258052c4506076525318ffa66817500f68e94a50fdf7d600a280d26"],"repoTags":["docker.io/kindest/kindnetd:v20241212-9f82dd49"],"size":"35679862"},{"id":"sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"74084559"},{"id":"sha256:cedb667e1a7b4e6d843a4f74f1f2db0dac1c29b43978aa72dbae2193e3b8eea3","repoDigests":["docker.io/library/nginx@sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591"],"repoTags":["docker.io/library/nginx:alpine"],"size":"21684747"},{"id":"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a
8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"16948420"},{"id":"sha256:3c9285acfd2ff7915bb451cc40ac060366ac519f3fef00c455f5aca0e0346c4d","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90"],"repoTags":["registry.k8s.io/kube-controller-manager:v
1.32.2"],"size":"23968941"},{"id":"sha256:ee75e27fff91c8d59835f9a3efdf968ff404e580bad69746a65bcf3e304ab26f","repoDigests":["docker.io/kindest/kindnetd@sha256:e3c42406b0806c1f7e8a66838377936cbd2cdfd94d9b26a3eefedada8713d495"],"repoTags":["docker.io/kindest/kindnetd:v20250214-acbabc1a"],"size":"35677907"},{"id":"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-062962"],"size":"2173567"},{"id":"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82","repoDigests":["registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5"],"repoTags":["registry.k8s.io/etcd:3.5.16-0"],"size":"67941650"},{"id":"sha256:82dfa03f692fb5d84f66c17d6ee9126b081182152b25d28ea456d89b7d5d8911","repoDigests":["registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76"],"repoTags":["registry.k8s.io/kube-scheduler:v1.32.2"],"size":"18921614"}]
functional_test.go:286: (dbg) Stderr: out/minikube-linux-arm64 -p functional-062962 image ls --format json --alsologtostderr:
I0407 12:47:34.991748  918161 out.go:345] Setting OutFile to fd 1 ...
I0407 12:47:34.991858  918161 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0407 12:47:34.991863  918161 out.go:358] Setting ErrFile to fd 2...
I0407 12:47:34.991869  918161 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0407 12:47:34.992888  918161 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20602-873072/.minikube/bin
I0407 12:47:34.993663  918161 config.go:182] Loaded profile config "functional-062962": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
I0407 12:47:34.993801  918161 config.go:182] Loaded profile config "functional-062962": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
I0407 12:47:34.994251  918161 cli_runner.go:164] Run: docker container inspect functional-062962 --format={{.State.Status}}
I0407 12:47:35.018211  918161 ssh_runner.go:195] Run: systemctl --version
I0407 12:47:35.018347  918161 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-062962
I0407 12:47:35.050541  918161 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33893 SSHKeyPath:/home/jenkins/minikube-integration/20602-873072/.minikube/machines/functional-062962/id_rsa Username:docker}
I0407 12:47:35.151130  918161 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.31s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:278: (dbg) Run:  out/minikube-linux-arm64 -p functional-062962 image ls --format yaml --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-arm64 -p functional-062962 image ls --format yaml --alsologtostderr:
- id: sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "18306114"
- id: sha256:2c9168b3c9a84851f91e03534dc4136951e9f581ab3ac8ee38b28b49ad57ba38
repoDigests:
- docker.io/library/nginx@sha256:124b44bfc9ccd1f3cedf4b592d4d1e8bddb78b51ec2ed5056c52d3692baebc19
repoTags:
- docker.io/library/nginx:latest
size: "68634448"
- id: sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82
repoDigests:
- registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5
repoTags:
- registry.k8s.io/etcd:3.5.16-0
size: "67941650"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:ee75e27fff91c8d59835f9a3efdf968ff404e580bad69746a65bcf3e304ab26f
repoDigests:
- docker.io/kindest/kindnetd@sha256:e3c42406b0806c1f7e8a66838377936cbd2cdfd94d9b26a3eefedada8713d495
repoTags:
- docker.io/kindest/kindnetd:v20250214-acbabc1a
size: "35677907"
- id: sha256:2fdaad110ad6e5422c47004d0e05967879dc3953ab83f135b9ba941b59020349
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-062962
size: "990"
- id: sha256:e5aac5df76d9b8dc899ab8c4db25a7648e7fb25cafe7a155066247883c78f062
repoDigests:
- registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d
repoTags:
- registry.k8s.io/kube-proxy:v1.32.2
size: "27362401"
- id: sha256:82dfa03f692fb5d84f66c17d6ee9126b081182152b25d28ea456d89b7d5d8911
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76
repoTags:
- registry.k8s.io/kube-scheduler:v1.32.2
size: "18921614"
- id: sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-062962
size: "2173567"
- id: sha256:e1181ee320546c66f17956a302db1b7899d88a593f116726718851133de588b6
repoDigests:
- docker.io/kindest/kindnetd@sha256:56ea59f77258052c4506076525318ffa66817500f68e94a50fdf7d600a280d26
repoTags:
- docker.io/kindest/kindnetd:v20241212-9f82dd49
size: "35679862"
- id: sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "74084559"
- id: sha256:cedb667e1a7b4e6d843a4f74f1f2db0dac1c29b43978aa72dbae2193e3b8eea3
repoDigests:
- docker.io/library/nginx@sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591
repoTags:
- docker.io/library/nginx:alpine
size: "21684747"
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "16948420"
- id: sha256:6417e1437b6d9a789e1ca789695a574e1df00a632bdbfbcae9695c9a7d500e32
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f
repoTags:
- registry.k8s.io/kube-apiserver:v1.32.2
size: "26215036"
- id: sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "45324675"
- id: sha256:3c9285acfd2ff7915bb451cc40ac060366ac519f3fef00c455f5aca0e0346c4d
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90
repoTags:
- registry.k8s.io/kube-controller-manager:v1.32.2
size: "23968941"
- id: sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests:
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "267933"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"

functional_test.go:286: (dbg) Stderr: out/minikube-linux-arm64 -p functional-062962 image ls --format yaml --alsologtostderr:
I0407 12:47:34.699451  918103 out.go:345] Setting OutFile to fd 1 ...
I0407 12:47:34.699596  918103 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0407 12:47:34.699608  918103 out.go:358] Setting ErrFile to fd 2...
I0407 12:47:34.699627  918103 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0407 12:47:34.700003  918103 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20602-873072/.minikube/bin
I0407 12:47:34.700980  918103 config.go:182] Loaded profile config "functional-062962": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
I0407 12:47:34.701164  918103 config.go:182] Loaded profile config "functional-062962": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
I0407 12:47:34.701668  918103 cli_runner.go:164] Run: docker container inspect functional-062962 --format={{.State.Status}}
I0407 12:47:34.719234  918103 ssh_runner.go:195] Run: systemctl --version
I0407 12:47:34.719293  918103 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-062962
I0407 12:47:34.737593  918103 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33893 SSHKeyPath:/home/jenkins/minikube-integration/20602-873072/.minikube/machines/functional-062962/id_rsa Username:docker}
I0407 12:47:34.834493  918103 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

TestFunctional/parallel/ImageCommands/ImageBuild (5.39s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-062962 ssh pgrep buildkitd
functional_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-062962 ssh pgrep buildkitd: exit status 1 (353.988162ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test.go:332: (dbg) Run:  out/minikube-linux-arm64 -p functional-062962 image build -t localhost/my-image:functional-062962 testdata/build --alsologtostderr
functional_test.go:332: (dbg) Done: out/minikube-linux-arm64 -p functional-062962 image build -t localhost/my-image:functional-062962 testdata/build --alsologtostderr: (4.783743356s)
functional_test.go:340: (dbg) Stderr: out/minikube-linux-arm64 -p functional-062962 image build -t localhost/my-image:functional-062962 testdata/build --alsologtostderr:
I0407 12:47:35.242071  918237 out.go:345] Setting OutFile to fd 1 ...
I0407 12:47:35.242740  918237 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0407 12:47:35.242780  918237 out.go:358] Setting ErrFile to fd 2...
I0407 12:47:35.242801  918237 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0407 12:47:35.243274  918237 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20602-873072/.minikube/bin
I0407 12:47:35.244086  918237 config.go:182] Loaded profile config "functional-062962": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
I0407 12:47:35.245659  918237 config.go:182] Loaded profile config "functional-062962": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
I0407 12:47:35.246226  918237 cli_runner.go:164] Run: docker container inspect functional-062962 --format={{.State.Status}}
I0407 12:47:35.278054  918237 ssh_runner.go:195] Run: systemctl --version
I0407 12:47:35.278105  918237 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-062962
I0407 12:47:35.300350  918237 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33893 SSHKeyPath:/home/jenkins/minikube-integration/20602-873072/.minikube/machines/functional-062962/id_rsa Username:docker}
I0407 12:47:35.398285  918237 build_images.go:161] Building image from path: /tmp/build.410953075.tar
I0407 12:47:35.398388  918237 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0407 12:47:35.408461  918237 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.410953075.tar
I0407 12:47:35.412135  918237 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.410953075.tar: stat -c "%s %y" /var/lib/minikube/build/build.410953075.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.410953075.tar': No such file or directory
I0407 12:47:35.412161  918237 ssh_runner.go:362] scp /tmp/build.410953075.tar --> /var/lib/minikube/build/build.410953075.tar (3072 bytes)
I0407 12:47:35.442352  918237 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.410953075
I0407 12:47:35.460646  918237 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.410953075 -xf /var/lib/minikube/build/build.410953075.tar
I0407 12:47:35.474698  918237 containerd.go:394] Building image: /var/lib/minikube/build/build.410953075
I0407 12:47:35.474780  918237 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.410953075 --local dockerfile=/var/lib/minikube/build/build.410953075 --output type=image,name=localhost/my-image:functional-062962
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 2.3s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 DONE 0.1s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.8s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 1.0s

#6 [2/3] RUN true
#6 DONE 0.6s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:18875702fc3e5eed2920f20397c46fc983d4dce44676b0076283040e97e5f355 0.0s done
#8 exporting config sha256:84242a8339d39be9de64d4396d239e19a44b4e0bb0837d4e2a1f158e3d4078b3 0.0s done
#8 naming to localhost/my-image:functional-062962 done
#8 DONE 0.2s
I0407 12:47:39.910414  918237 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.410953075 --local dockerfile=/var/lib/minikube/build/build.410953075 --output type=image,name=localhost/my-image:functional-062962: (4.435608773s)
I0407 12:47:39.910484  918237 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.410953075
I0407 12:47:39.919555  918237 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.410953075.tar
I0407 12:47:39.930121  918237 build_images.go:217] Built localhost/my-image:functional-062962 from /tmp/build.410953075.tar
I0407 12:47:39.930157  918237 build_images.go:133] succeeded building to: functional-062962
I0407 12:47:39.930163  918237 build_images.go:134] failed building to: 
functional_test.go:468: (dbg) Run:  out/minikube-linux-arm64 -p functional-062962 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (5.39s)
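
Note: the build steps #5-#7 above imply a three-line Dockerfile (FROM busybox, RUN true, ADD content.txt). A minimal sketch that reproduces the same build by hand; the fixture contents are inferred from the log, not shown verbatim in it:

# Hypothetical recreation of the test's build context; the Dockerfile
# contents are inferred from build steps #5-#7 above.
mkdir -p /tmp/imagebuild-demo && cd /tmp/imagebuild-demo
printf 'demo\n' > content.txt
cat > Dockerfile <<'EOF'
FROM gcr.io/k8s-minikube/busybox:latest
RUN true
ADD content.txt /
EOF
# Build inside the cluster and confirm the image landed:
out/minikube-linux-arm64 -p functional-062962 image build -t localhost/my-image:functional-062962 .
out/minikube-linux-arm64 -p functional-062962 image ls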

TestFunctional/parallel/ImageCommands/Setup (0.86s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:359: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:364: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-062962
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.86s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:372: (dbg) Run:  out/minikube-linux-arm64 -p functional-062962 image load --daemon kicbase/echo-server:functional-062962 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-arm64 -p functional-062962 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.21s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.19s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p functional-062962 image load --daemon kicbase/echo-server:functional-062962 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-arm64 -p functional-062962 image ls
2025/04/07 12:47:29 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.19s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.67s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:252: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:257: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-062962
functional_test.go:262: (dbg) Run:  out/minikube-linux-arm64 -p functional-062962 image load --daemon kicbase/echo-server:functional-062962 --alsologtostderr
functional_test.go:262: (dbg) Done: out/minikube-linux-arm64 -p functional-062962 image load --daemon kicbase/echo-server:functional-062962 --alsologtostderr: (1.122239554s)
functional_test.go:468: (dbg) Run:  out/minikube-linux-arm64 -p functional-062962 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.67s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.22s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2136: (dbg) Run:  out/minikube-linux-arm64 -p functional-062962 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.22s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.21s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2136: (dbg) Run:  out/minikube-linux-arm64 -p functional-062962 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.21s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.23s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2136: (dbg) Run:  out/minikube-linux-arm64 -p functional-062962 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.23s)
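
Note: all three UpdateContextCmd subtests drive the same command against different kubeconfig states (context unchanged, cluster entry missing, no clusters at all). Manual equivalent, assuming kubectl is on PATH:

out/minikube-linux-arm64 -p functional-062962 update-context --alsologtostderr -v=2
kubectl config current-context   # expected: functional-062962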

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.47s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:397: (dbg) Run:  out/minikube-linux-arm64 -p functional-062962 image save kicbase/echo-server:functional-062962 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.47s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.61s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:409: (dbg) Run:  out/minikube-linux-arm64 -p functional-062962 image rm kicbase/echo-server:functional-062962 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-arm64 -p functional-062962 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.61s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.78s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:426: (dbg) Run:  out/minikube-linux-arm64 -p functional-062962 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-arm64 -p functional-062962 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.78s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.46s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:436: (dbg) Run:  docker rmi kicbase/echo-server:functional-062962
functional_test.go:441: (dbg) Run:  out/minikube-linux-arm64 -p functional-062962 image save --daemon kicbase/echo-server:functional-062962 --alsologtostderr
functional_test.go:449: (dbg) Run:  docker image inspect kicbase/echo-server:functional-062962
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.46s)
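
Note: taken together, Setup, ImageLoadDaemon, ImageSaveToFile, ImageRemove, ImageLoadFromFile and ImageSaveDaemon above exercise one save/load round trip. Condensed sketch (commands as in the log; the tarball path here is illustrative, not the Jenkins workspace path):

docker pull kicbase/echo-server:1.0
docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-062962
out/minikube-linux-arm64 -p functional-062962 image load --daemon kicbase/echo-server:functional-062962
out/minikube-linux-arm64 -p functional-062962 image save kicbase/echo-server:functional-062962 /tmp/echo-server-save.tar
out/minikube-linux-arm64 -p functional-062962 image rm kicbase/echo-server:functional-062962
out/minikube-linux-arm64 -p functional-062962 image load /tmp/echo-server-save.tar
out/minikube-linux-arm64 -p functional-062962 image save --daemon kicbase/echo-server:functional-062962
docker image inspect kicbase/echo-server:functional-062962   # image is back in the local daemon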

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:207: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:207: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-062962
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:215: (dbg) Run:  docker rmi -f localhost/my-image:functional-062962
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:223: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-062962
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (122.04s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-820011 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E0407 12:47:51.767201  878594 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/addons-596243/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-820011 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (2m1.08477755s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-820011 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (122.04s)
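
Note: the --ha flag provisions three control-plane nodes behind a shared virtual endpoint (192.168.49.254:8443 in the status output later in this report). A quick post-start sanity check, assuming kubectl has picked up the ha-820011 context:

kubectl --context ha-820011 get nodes      # expect three control-plane nodes
kubectl --context ha-820011 cluster-info   # API server answers on the shared endpoint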

TestMultiControlPlane/serial/DeployApp (33.88s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-820011 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-820011 -- rollout status deployment/busybox
E0407 12:50:07.905071  878594 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/addons-596243/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-820011 -- rollout status deployment/busybox: (30.487569691s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-820011 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-820011 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-820011 -- exec busybox-58667487b6-6j2tp -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-820011 -- exec busybox-58667487b6-mdcdc -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-820011 -- exec busybox-58667487b6-q9zqw -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-820011 -- exec busybox-58667487b6-6j2tp -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-820011 -- exec busybox-58667487b6-mdcdc -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-820011 -- exec busybox-58667487b6-q9zqw -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-820011 -- exec busybox-58667487b6-6j2tp -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-820011 -- exec busybox-58667487b6-mdcdc -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-820011 -- exec busybox-58667487b6-q9zqw -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (33.88s)
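
Note: the busybox deployment schedules one pod per node, and DNS resolution is then checked pod by pod. A manual spot check of a single pod; the label selector app=busybox is an assumption about the fixture, and pod names differ per run:

kubectl --context ha-820011 rollout status deployment/busybox
# app=busybox is assumed; adjust the selector to match the fixture.
POD=$(kubectl --context ha-820011 get pods -l app=busybox -o jsonpath='{.items[0].metadata.name}')
kubectl --context ha-820011 exec "$POD" -- nslookup kubernetes.default.svc.cluster.local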

TestMultiControlPlane/serial/PingHostFromPods (1.69s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-820011 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-820011 -- exec busybox-58667487b6-6j2tp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-820011 -- exec busybox-58667487b6-6j2tp -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-820011 -- exec busybox-58667487b6-mdcdc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-820011 -- exec busybox-58667487b6-mdcdc -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-820011 -- exec busybox-58667487b6-q9zqw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-820011 -- exec busybox-58667487b6-q9zqw -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.69s)
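
Note: the awk/cut pipeline above pulls the resolved address of host.minikube.internal out of nslookup's fixed-format output (fifth line, third field) and pings it; 192.168.49.1 is the docker network gateway for this profile. Stand-alone version of the same check, reusing $POD from the previous sketch:

HOST_IP=$(kubectl --context ha-820011 exec "$POD" -- sh -c \
  "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
kubectl --context ha-820011 exec "$POD" -- ping -c 1 "$HOST_IP"   # 192.168.49.1 here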

TestMultiControlPlane/serial/AddWorkerNode (21.05s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-820011 -v=7 --alsologtostderr
E0407 12:50:35.609312  878594 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/addons-596243/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-820011 -v=7 --alsologtostderr: (20.072644072s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-820011 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (21.05s)

TestMultiControlPlane/serial/NodeLabels (0.12s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-820011 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.12s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (1.12s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.119383046s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.12s)

TestMultiControlPlane/serial/CopyFile (19.94s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-820011 status --output json -v=7 --alsologtostderr
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-820011 status --output json -v=7 --alsologtostderr: (1.000245314s)
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-820011 cp testdata/cp-test.txt ha-820011:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-820011 ssh -n ha-820011 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-820011 cp ha-820011:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile437169691/001/cp-test_ha-820011.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-820011 ssh -n ha-820011 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-820011 cp ha-820011:/home/docker/cp-test.txt ha-820011-m02:/home/docker/cp-test_ha-820011_ha-820011-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-820011 ssh -n ha-820011 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-820011 ssh -n ha-820011-m02 "sudo cat /home/docker/cp-test_ha-820011_ha-820011-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-820011 cp ha-820011:/home/docker/cp-test.txt ha-820011-m03:/home/docker/cp-test_ha-820011_ha-820011-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-820011 ssh -n ha-820011 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-820011 ssh -n ha-820011-m03 "sudo cat /home/docker/cp-test_ha-820011_ha-820011-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-820011 cp ha-820011:/home/docker/cp-test.txt ha-820011-m04:/home/docker/cp-test_ha-820011_ha-820011-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-820011 ssh -n ha-820011 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-820011 ssh -n ha-820011-m04 "sudo cat /home/docker/cp-test_ha-820011_ha-820011-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-820011 cp testdata/cp-test.txt ha-820011-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-820011 ssh -n ha-820011-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-820011 cp ha-820011-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile437169691/001/cp-test_ha-820011-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-820011 ssh -n ha-820011-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-820011 cp ha-820011-m02:/home/docker/cp-test.txt ha-820011:/home/docker/cp-test_ha-820011-m02_ha-820011.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-820011 ssh -n ha-820011-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-820011 ssh -n ha-820011 "sudo cat /home/docker/cp-test_ha-820011-m02_ha-820011.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-820011 cp ha-820011-m02:/home/docker/cp-test.txt ha-820011-m03:/home/docker/cp-test_ha-820011-m02_ha-820011-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-820011 ssh -n ha-820011-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-820011 ssh -n ha-820011-m03 "sudo cat /home/docker/cp-test_ha-820011-m02_ha-820011-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-820011 cp ha-820011-m02:/home/docker/cp-test.txt ha-820011-m04:/home/docker/cp-test_ha-820011-m02_ha-820011-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-820011 ssh -n ha-820011-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-820011 ssh -n ha-820011-m04 "sudo cat /home/docker/cp-test_ha-820011-m02_ha-820011-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-820011 cp testdata/cp-test.txt ha-820011-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-820011 ssh -n ha-820011-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-820011 cp ha-820011-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile437169691/001/cp-test_ha-820011-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-820011 ssh -n ha-820011-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-820011 cp ha-820011-m03:/home/docker/cp-test.txt ha-820011:/home/docker/cp-test_ha-820011-m03_ha-820011.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-820011 ssh -n ha-820011-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-820011 ssh -n ha-820011 "sudo cat /home/docker/cp-test_ha-820011-m03_ha-820011.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-820011 cp ha-820011-m03:/home/docker/cp-test.txt ha-820011-m02:/home/docker/cp-test_ha-820011-m03_ha-820011-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-820011 ssh -n ha-820011-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-820011 ssh -n ha-820011-m02 "sudo cat /home/docker/cp-test_ha-820011-m03_ha-820011-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-820011 cp ha-820011-m03:/home/docker/cp-test.txt ha-820011-m04:/home/docker/cp-test_ha-820011-m03_ha-820011-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-820011 ssh -n ha-820011-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-820011 ssh -n ha-820011-m04 "sudo cat /home/docker/cp-test_ha-820011-m03_ha-820011-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-820011 cp testdata/cp-test.txt ha-820011-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-820011 ssh -n ha-820011-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-820011 cp ha-820011-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile437169691/001/cp-test_ha-820011-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-820011 ssh -n ha-820011-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-820011 cp ha-820011-m04:/home/docker/cp-test.txt ha-820011:/home/docker/cp-test_ha-820011-m04_ha-820011.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-820011 ssh -n ha-820011-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-820011 ssh -n ha-820011 "sudo cat /home/docker/cp-test_ha-820011-m04_ha-820011.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-820011 cp ha-820011-m04:/home/docker/cp-test.txt ha-820011-m02:/home/docker/cp-test_ha-820011-m04_ha-820011-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-820011 ssh -n ha-820011-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-820011 ssh -n ha-820011-m02 "sudo cat /home/docker/cp-test_ha-820011-m04_ha-820011-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-820011 cp ha-820011-m04:/home/docker/cp-test.txt ha-820011-m03:/home/docker/cp-test_ha-820011-m04_ha-820011-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-820011 ssh -n ha-820011-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-820011 ssh -n ha-820011-m03 "sudo cat /home/docker/cp-test_ha-820011-m04_ha-820011-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.94s)
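
Note: every node pairing above follows the same two-step pattern: minikube cp into a target node, then cat over ssh to verify the bytes arrived (-n selects the node). One representative pair:

out/minikube-linux-arm64 -p ha-820011 cp testdata/cp-test.txt ha-820011-m02:/home/docker/cp-test.txt
out/minikube-linux-arm64 -p ha-820011 ssh -n ha-820011-m02 "sudo cat /home/docker/cp-test.txt"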

TestMultiControlPlane/serial/StopSecondaryNode (12.9s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-820011 node stop m02 -v=7 --alsologtostderr
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-820011 node stop m02 -v=7 --alsologtostderr: (12.111818668s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-820011 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-820011 status -v=7 --alsologtostderr: exit status 7 (792.42872ms)

-- stdout --
	ha-820011
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-820011-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-820011-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-820011-m04
	type: Worker
	host: Running
	kubelet: Running

-- /stdout --
** stderr ** 
	I0407 12:51:15.107733  934931 out.go:345] Setting OutFile to fd 1 ...
	I0407 12:51:15.107888  934931 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 12:51:15.107893  934931 out.go:358] Setting ErrFile to fd 2...
	I0407 12:51:15.107899  934931 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 12:51:15.108215  934931 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20602-873072/.minikube/bin
	I0407 12:51:15.108431  934931 out.go:352] Setting JSON to false
	I0407 12:51:15.108474  934931 mustload.go:65] Loading cluster: ha-820011
	I0407 12:51:15.108523  934931 notify.go:220] Checking for updates...
	I0407 12:51:15.108975  934931 config.go:182] Loaded profile config "ha-820011": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0407 12:51:15.108998  934931 status.go:174] checking status of ha-820011 ...
	I0407 12:51:15.109938  934931 cli_runner.go:164] Run: docker container inspect ha-820011 --format={{.State.Status}}
	I0407 12:51:15.133377  934931 status.go:371] ha-820011 host status = "Running" (err=<nil>)
	I0407 12:51:15.133412  934931 host.go:66] Checking if "ha-820011" exists ...
	I0407 12:51:15.133790  934931 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-820011
	I0407 12:51:15.176130  934931 host.go:66] Checking if "ha-820011" exists ...
	I0407 12:51:15.176462  934931 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0407 12:51:15.176521  934931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-820011
	I0407 12:51:15.204932  934931 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33898 SSHKeyPath:/home/jenkins/minikube-integration/20602-873072/.minikube/machines/ha-820011/id_rsa Username:docker}
	I0407 12:51:15.295221  934931 ssh_runner.go:195] Run: systemctl --version
	I0407 12:51:15.300302  934931 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0407 12:51:15.313930  934931 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0407 12:51:15.377011  934931 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:62 OomKillDisable:true NGoroutines:72 SystemTime:2025-04-07 12:51:15.366463787 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1081-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0]] Warnings:<nil>}}
	I0407 12:51:15.377589  934931 kubeconfig.go:125] found "ha-820011" server: "https://192.168.49.254:8443"
	I0407 12:51:15.377627  934931 api_server.go:166] Checking apiserver status ...
	I0407 12:51:15.377673  934931 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 12:51:15.390013  934931 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1566/cgroup
	I0407 12:51:15.400138  934931 api_server.go:182] apiserver freezer: "4:freezer:/docker/a2a3d729d4ea5d2f6664ba7e6b9767695146c985fa5eb82232b00de344af34e6/kubepods/burstable/podb5769455a31d9c09b298b0e5ab40cf89/be0527e36ac0a7e66f72505325f1fbd632d225d16a3934fb3c536a1f9504ce84"
	I0407 12:51:15.400212  934931 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/a2a3d729d4ea5d2f6664ba7e6b9767695146c985fa5eb82232b00de344af34e6/kubepods/burstable/podb5769455a31d9c09b298b0e5ab40cf89/be0527e36ac0a7e66f72505325f1fbd632d225d16a3934fb3c536a1f9504ce84/freezer.state
	I0407 12:51:15.409790  934931 api_server.go:204] freezer state: "THAWED"
	I0407 12:51:15.409825  934931 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0407 12:51:15.420552  934931 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0407 12:51:15.420627  934931 status.go:463] ha-820011 apiserver status = Running (err=<nil>)
	I0407 12:51:15.420646  934931 status.go:176] ha-820011 status: &{Name:ha-820011 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0407 12:51:15.420666  934931 status.go:174] checking status of ha-820011-m02 ...
	I0407 12:51:15.420997  934931 cli_runner.go:164] Run: docker container inspect ha-820011-m02 --format={{.State.Status}}
	I0407 12:51:15.439952  934931 status.go:371] ha-820011-m02 host status = "Stopped" (err=<nil>)
	I0407 12:51:15.439978  934931 status.go:384] host is not running, skipping remaining checks
	I0407 12:51:15.439986  934931 status.go:176] ha-820011-m02 status: &{Name:ha-820011-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0407 12:51:15.440007  934931 status.go:174] checking status of ha-820011-m03 ...
	I0407 12:51:15.440339  934931 cli_runner.go:164] Run: docker container inspect ha-820011-m03 --format={{.State.Status}}
	I0407 12:51:15.458797  934931 status.go:371] ha-820011-m03 host status = "Running" (err=<nil>)
	I0407 12:51:15.458825  934931 host.go:66] Checking if "ha-820011-m03" exists ...
	I0407 12:51:15.459203  934931 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-820011-m03
	I0407 12:51:15.479662  934931 host.go:66] Checking if "ha-820011-m03" exists ...
	I0407 12:51:15.479988  934931 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0407 12:51:15.480043  934931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-820011-m03
	I0407 12:51:15.500338  934931 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33908 SSHKeyPath:/home/jenkins/minikube-integration/20602-873072/.minikube/machines/ha-820011-m03/id_rsa Username:docker}
	I0407 12:51:15.595416  934931 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0407 12:51:15.607888  934931 kubeconfig.go:125] found "ha-820011" server: "https://192.168.49.254:8443"
	I0407 12:51:15.607919  934931 api_server.go:166] Checking apiserver status ...
	I0407 12:51:15.608001  934931 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 12:51:15.621018  934931 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1357/cgroup
	I0407 12:51:15.635287  934931 api_server.go:182] apiserver freezer: "4:freezer:/docker/56a203790108c959f8001950f0acb33be21e1b88c20ea28b12fff365d17d38b9/kubepods/burstable/pod71d601d87616ed699c9472d235346c25/3b4896ed404f78fbb9959f256dec2959a39902c56361a33aa520f574fe98c96b"
	I0407 12:51:15.635366  934931 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/56a203790108c959f8001950f0acb33be21e1b88c20ea28b12fff365d17d38b9/kubepods/burstable/pod71d601d87616ed699c9472d235346c25/3b4896ed404f78fbb9959f256dec2959a39902c56361a33aa520f574fe98c96b/freezer.state
	I0407 12:51:15.645227  934931 api_server.go:204] freezer state: "THAWED"
	I0407 12:51:15.645258  934931 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0407 12:51:15.653295  934931 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0407 12:51:15.653327  934931 status.go:463] ha-820011-m03 apiserver status = Running (err=<nil>)
	I0407 12:51:15.653337  934931 status.go:176] ha-820011-m03 status: &{Name:ha-820011-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0407 12:51:15.653354  934931 status.go:174] checking status of ha-820011-m04 ...
	I0407 12:51:15.653674  934931 cli_runner.go:164] Run: docker container inspect ha-820011-m04 --format={{.State.Status}}
	I0407 12:51:15.672033  934931 status.go:371] ha-820011-m04 host status = "Running" (err=<nil>)
	I0407 12:51:15.672063  934931 host.go:66] Checking if "ha-820011-m04" exists ...
	I0407 12:51:15.672414  934931 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-820011-m04
	I0407 12:51:15.691608  934931 host.go:66] Checking if "ha-820011-m04" exists ...
	I0407 12:51:15.691923  934931 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0407 12:51:15.691985  934931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-820011-m04
	I0407 12:51:15.714542  934931 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33913 SSHKeyPath:/home/jenkins/minikube-integration/20602-873072/.minikube/machines/ha-820011-m04/id_rsa Username:docker}
	I0407 12:51:15.804101  934931 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0407 12:51:15.818700  934931 status.go:176] ha-820011-m04 status: &{Name:ha-820011-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.90s)
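
Note: the non-zero exit above is expected; minikube status exits 7 while any profile node is stopped, which is exactly what the test asserts after stopping m02. Sketch:

out/minikube-linux-arm64 -p ha-820011 node stop m02
out/minikube-linux-arm64 -p ha-820011 status || echo "status exit code: $?"   # prints 7 here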

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.79s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.79s)

TestMultiControlPlane/serial/RestartSecondaryNode (18.48s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-820011 node start m02 -v=7 --alsologtostderr
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-820011 node start m02 -v=7 --alsologtostderr: (17.319532943s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-820011 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-820011 status -v=7 --alsologtostderr: (1.047363779s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (18.48s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.02s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.019661023s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.02s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (135.95s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-820011 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-820011 -v=7 --alsologtostderr
E0407 12:51:43.456438  878594 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/functional-062962/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:51:43.462811  878594 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/functional-062962/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:51:43.474187  878594 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/functional-062962/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:51:43.495670  878594 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/functional-062962/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:51:43.537100  878594 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/functional-062962/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:51:43.618516  878594 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/functional-062962/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:51:43.780079  878594 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/functional-062962/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:51:44.101790  878594 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/functional-062962/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:51:44.743859  878594 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/functional-062962/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:51:46.025456  878594 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/functional-062962/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:51:48.587213  878594 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/functional-062962/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:51:53.709518  878594 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/functional-062962/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:52:03.951357  878594 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/functional-062962/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 stop -p ha-820011 -v=7 --alsologtostderr: (37.205336527s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 start -p ha-820011 --wait=true -v=7 --alsologtostderr
E0407 12:52:24.432764  878594 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/functional-062962/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:53:05.394428  878594 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/functional-062962/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 start -p ha-820011 --wait=true -v=7 --alsologtostderr: (1m38.580670423s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-820011
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (135.95s)
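
Note: the assertion here is that a full stop/start cycle preserves the node roster, including the worker added earlier. Condensed sketch of the same flow:

out/minikube-linux-arm64 node list -p ha-820011 > /tmp/nodes-before.txt
out/minikube-linux-arm64 stop -p ha-820011
out/minikube-linux-arm64 start -p ha-820011 --wait=true
out/minikube-linux-arm64 node list -p ha-820011 > /tmp/nodes-after.txt
diff /tmp/nodes-before.txt /tmp/nodes-after.txt   # expect no difference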

TestMultiControlPlane/serial/DeleteSecondaryNode (10.76s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-820011 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-820011 node delete m03 -v=7 --alsologtostderr: (9.787138478s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-820011 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.76s)
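
Note: after deleting the m03 control plane, the go-template query confirms every remaining node still reports Ready. The same check written out (template as in the log, minus the outer quoting the harness adds):

out/minikube-linux-arm64 -p ha-820011 node delete m03
kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'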

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.75s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.75s)

TestMultiControlPlane/serial/StopCluster (36.11s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-820011 stop -v=7 --alsologtostderr
E0407 12:54:27.317918  878594 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/functional-062962/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-820011 stop -v=7 --alsologtostderr: (35.999864798s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-820011 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-820011 status -v=7 --alsologtostderr: exit status 7 (109.889668ms)

-- stdout --
	ha-820011
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-820011-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-820011-m04
	type: Worker
	host: Stopped
	kubelet: Stopped

-- /stdout --
** stderr ** 
	I0407 12:54:39.616114  949912 out.go:345] Setting OutFile to fd 1 ...
	I0407 12:54:39.616256  949912 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 12:54:39.616282  949912 out.go:358] Setting ErrFile to fd 2...
	I0407 12:54:39.616305  949912 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 12:54:39.616578  949912 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20602-873072/.minikube/bin
	I0407 12:54:39.616796  949912 out.go:352] Setting JSON to false
	I0407 12:54:39.616851  949912 mustload.go:65] Loading cluster: ha-820011
	I0407 12:54:39.616932  949912 notify.go:220] Checking for updates...
	I0407 12:54:39.617372  949912 config.go:182] Loaded profile config "ha-820011": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0407 12:54:39.617396  949912 status.go:174] checking status of ha-820011 ...
	I0407 12:54:39.618223  949912 cli_runner.go:164] Run: docker container inspect ha-820011 --format={{.State.Status}}
	I0407 12:54:39.636005  949912 status.go:371] ha-820011 host status = "Stopped" (err=<nil>)
	I0407 12:54:39.636031  949912 status.go:384] host is not running, skipping remaining checks
	I0407 12:54:39.636040  949912 status.go:176] ha-820011 status: &{Name:ha-820011 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0407 12:54:39.636063  949912 status.go:174] checking status of ha-820011-m02 ...
	I0407 12:54:39.636375  949912 cli_runner.go:164] Run: docker container inspect ha-820011-m02 --format={{.State.Status}}
	I0407 12:54:39.659374  949912 status.go:371] ha-820011-m02 host status = "Stopped" (err=<nil>)
	I0407 12:54:39.659402  949912 status.go:384] host is not running, skipping remaining checks
	I0407 12:54:39.659409  949912 status.go:176] ha-820011-m02 status: &{Name:ha-820011-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0407 12:54:39.659434  949912 status.go:174] checking status of ha-820011-m04 ...
	I0407 12:54:39.659730  949912 cli_runner.go:164] Run: docker container inspect ha-820011-m04 --format={{.State.Status}}
	I0407 12:54:39.677577  949912 status.go:371] ha-820011-m04 host status = "Stopped" (err=<nil>)
	I0407 12:54:39.677598  949912 status.go:384] host is not running, skipping remaining checks
	I0407 12:54:39.677605  949912 status.go:176] ha-820011-m04 status: &{Name:ha-820011-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.11s)

TestMultiControlPlane/serial/RestartCluster (63.37s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 start -p ha-820011 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E0407 12:55:07.903978  878594 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/addons-596243/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 start -p ha-820011 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m2.363141421s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-820011 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (63.37s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.78s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.78s)

TestMultiControlPlane/serial/AddSecondaryNode (46.4s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-820011 --control-plane -v=7 --alsologtostderr
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 node add -p ha-820011 --control-plane -v=7 --alsologtostderr: (45.317070424s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-820011 status -v=7 --alsologtostderr
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-820011 status -v=7 --alsologtostderr: (1.07905462s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (46.40s)
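
Note: node add --control-plane brings the cluster back to three control planes after the earlier delete; status should then show a running apiserver on each of them. Sketch:

out/minikube-linux-arm64 node add -p ha-820011 --control-plane
out/minikube-linux-arm64 -p ha-820011 status   # apiserver: Running on every control-plane node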

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.04s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.044287414s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.04s)

TestJSONOutput/start/Command (49.86s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-824390 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd
E0407 12:56:43.455818  878594 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/functional-062962/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:57:11.160387  878594 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/functional-062962/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-824390 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd: (49.853652745s)
--- PASS: TestJSONOutput/start/Command (49.86s)
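
Note: --output=json emits one CloudEvents-style JSON object per line; the Audit, DistinctCurrentSteps and IncreasingCurrentSteps subtests below validate that stream. A sketch for watching step progression, assuming jq is installed and that step events carry data.currentstep and data.message fields (field names inferred from the subtest names, not shown in this log):

out/minikube-linux-arm64 start -p json-output-824390 --output=json --user=testUser \
    --memory=2200 --wait=true --driver=docker --container-runtime=containerd \
  | jq -r 'select(.data.currentstep != null) | "\(.data.currentstep): \(.data.message)"'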

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.78s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-824390 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.78s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.72s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-824390 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.72s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.84s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-824390 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-824390 --output=json --user=testUser: (5.837006665s)
--- PASS: TestJSONOutput/stop/Command (5.84s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.26s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-462814 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-462814 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (102.641292ms)

-- stdout --
	{"specversion":"1.0","id":"806cfcba-88fe-4891-970d-6985499a5798","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-462814] minikube v1.35.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"30c5fc53-d334-4bd3-b5f9-3aa0c0bc986c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20602"}}
	{"specversion":"1.0","id":"abc6a020-0a28-462f-a12a-0216a5e676a3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"7f0daf61-42d4-443c-a072-84c8a7e1df4f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20602-873072/kubeconfig"}}
	{"specversion":"1.0","id":"9bab4bc3-78b6-423e-bb5b-0ab0fc958599","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20602-873072/.minikube"}}
	{"specversion":"1.0","id":"1c739bb4-05f8-4042-9174-a6947ae31aaa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"a00c31fe-9730-40a9-a33a-e1f8a06bc3e4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"7facccef-3799-4029-a8bc-2beb198988e3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-462814" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-462814
--- PASS: TestErrorJSONOutput (0.26s)
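
The --output=json stream above is one CloudEvents-style JSON object per line. A minimal Go sketch (illustrative, not the test's actual code) of decoding such a stream, using only field names visible in the stdout above — type, data.currentstep, data.exitcode, data.name:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
	"strconv"
)

// event mirrors the fields visible in the JSON lines above; any other
// fields are simply ignored by encoding/json.
type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	var steps []int
	sc := bufio.NewScanner(os.Stdin) // e.g. piped from `minikube start --output=json`
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip any non-JSON lines
		}
		switch ev.Type {
		case "io.k8s.sigs.minikube.step":
			if n, err := strconv.Atoi(ev.Data["currentstep"]); err == nil {
				steps = append(steps, n)
			}
		case "io.k8s.sigs.minikube.error":
			fmt.Printf("error event: name=%s exitcode=%s\n", ev.Data["name"], ev.Data["exitcode"])
		}
	}
	// The Distinct/IncreasingCurrentSteps subtests above suggest invariants
	// of roughly this shape over the collected step numbers:
	for i := 1; i < len(steps); i++ {
		if steps[i] <= steps[i-1] {
			fmt.Printf("currentstep not increasing: %v\n", steps)
		}
	}
}

Fed the stdout above, this would report the DRV_UNSUPPORTED_OS error event with exit code 56.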

TestKicCustomNetwork/create_custom_network (38.29s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-107365 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-107365 --network=: (36.066173503s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-107365" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-107365
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-107365: (2.19803497s)
--- PASS: TestKicCustomNetwork/create_custom_network (38.29s)

TestKicCustomNetwork/use_default_bridge_network (34.35s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-250610 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-250610 --network=bridge: (32.23720126s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-250610" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-250610
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-250610: (2.083595098s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (34.35s)

TestKicExistingNetwork (35.9s)

=== RUN   TestKicExistingNetwork
I0407 12:58:53.208271  878594 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0407 12:58:53.225102  878594 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0407 12:58:53.225869  878594 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0407 12:58:53.225892  878594 cli_runner.go:164] Run: docker network inspect existing-network
W0407 12:58:53.242620  878594 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0407 12:58:53.242649  878594 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I0407 12:58:53.242667  878594 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I0407 12:58:53.243585  878594 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0407 12:58:53.260480  878594 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-625dcc49b89d IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:8e:0e:5a:42:67:be} reservation:<nil>}
I0407 12:58:53.264117  878594 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
I0407 12:58:53.264573  878594 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001be9710}
I0407 12:58:53.265180  878594 network_create.go:124] attempt to create docker network existing-network 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
I0407 12:58:53.265253  878594 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0407 12:58:53.337292  878594 network_create.go:108] docker network existing-network 192.168.67.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-315928 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-315928 --network=existing-network: (33.653158173s)
helpers_test.go:175: Cleaning up "existing-network-315928" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-315928
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-315928: (2.081454558s)
I0407 12:59:29.089743  878594 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (35.90s)
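
The trace above shows the subnet picker at work: 192.168.49.0/24 is skipped as taken, 192.168.58.0/24 as reserved, and 192.168.67.0/24 is used — candidates spaced 9 apart. A minimal sketch of that selection loop, assuming a caller-supplied isTaken check in place of minikube's real interface/reservation scan:

package main

import "fmt"

// freeSubnet walks 192.168.x.0/24 candidates in steps of 9 (the spacing
// visible in the log: 49 -> 58 -> 67) and returns the first one not in use.
func freeSubnet(isTaken func(string) bool) (string, bool) {
	for third := 49; third <= 247; third += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", third)
		if !isTaken(cidr) {
			return cidr, true
		}
	}
	return "", false
}

func main() {
	used := map[string]bool{
		"192.168.49.0/24": true, // taken by an existing bridge
		"192.168.58.0/24": true, // reserved
	}
	cidr, ok := freeSubnet(func(c string) bool { return used[c] })
	fmt.Println(cidr, ok) // 192.168.67.0/24 true
}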

TestKicCustomSubnet (36.94s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-463512 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-463512 --subnet=192.168.60.0/24: (34.621370372s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-463512 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-463512" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-463512
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-463512: (2.294083803s)
--- PASS: TestKicCustomSubnet (36.94s)

TestKicStaticIP (35.01s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-379542 --static-ip=192.168.200.200
E0407 13:00:07.904574  878594 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/addons-596243/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-379542 --static-ip=192.168.200.200: (32.710468724s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-379542 ip
helpers_test.go:175: Cleaning up "static-ip-379542" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-379542
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-379542: (2.135781233s)
--- PASS: TestKicStaticIP (35.01s)

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (67.66s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-982072 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-982072 --driver=docker  --container-runtime=containerd: (30.391127275s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-984608 --driver=docker  --container-runtime=containerd
E0407 13:01:30.973954  878594 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/addons-596243/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-984608 --driver=docker  --container-runtime=containerd: (31.326999372s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-982072
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
E0407 13:01:43.455111  878594 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/functional-062962/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-984608
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-984608" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-984608
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-984608: (2.167024442s)
helpers_test.go:175: Cleaning up "first-982072" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-982072
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-982072: (2.352589893s)
--- PASS: TestMinikubeProfile (67.66s)

TestMountStart/serial/StartWithMountFirst (6.12s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-629222 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-629222 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.116329042s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.12s)

TestMountStart/serial/VerifyMountFirst (0.28s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-629222 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.28s)

TestMountStart/serial/StartWithMountSecond (6.66s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-631429 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-631429 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.662906682s)
--- PASS: TestMountStart/serial/StartWithMountSecond (6.66s)

TestMountStart/serial/VerifyMountSecond (0.26s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-631429 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

TestMountStart/serial/DeleteFirst (1.64s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-629222 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-629222 --alsologtostderr -v=5: (1.637129551s)
--- PASS: TestMountStart/serial/DeleteFirst (1.64s)

TestMountStart/serial/VerifyMountPostDelete (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-631429 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)

TestMountStart/serial/Stop (1.23s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-631429
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-631429: (1.226680808s)
--- PASS: TestMountStart/serial/Stop (1.23s)

TestMountStart/serial/RestartStopped (7.68s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-631429
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-631429: (6.677187185s)
--- PASS: TestMountStart/serial/RestartStopped (7.68s)

TestMountStart/serial/VerifyMountPostStop (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-631429 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

TestMultiNode/serial/FreshStart2Nodes (65.77s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-816869 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-816869 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m5.26777969s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-816869 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (65.77s)

TestMultiNode/serial/DeployApp2Nodes (20.84s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-816869 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-816869 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-816869 -- rollout status deployment/busybox: (18.852555993s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-816869 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-816869 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-816869 -- exec busybox-58667487b6-522f2 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-816869 -- exec busybox-58667487b6-8j549 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-816869 -- exec busybox-58667487b6-522f2 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-816869 -- exec busybox-58667487b6-8j549 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-816869 -- exec busybox-58667487b6-522f2 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-816869 -- exec busybox-58667487b6-8j549 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (20.84s)

TestMultiNode/serial/PingHostFrom2Pods (1.1s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-816869 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-816869 -- exec busybox-58667487b6-522f2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-816869 -- exec busybox-58667487b6-522f2 -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-816869 -- exec busybox-58667487b6-8j549 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-816869 -- exec busybox-58667487b6-8j549 -- sh -c "ping -c 1 192.168.58.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.10s)
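
The shell pipeline above — nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3 — takes the fifth line of busybox nslookup output and its third space-separated field, which is the host gateway IP (192.168.58.1) that each pod then pings. A small Go equivalent of that extraction; the sample output layout is an assumption about busybox's nslookup format:

package main

import (
	"fmt"
	"strings"
)

// hostIP mimics `awk 'NR==5' | cut -d' ' -f3`: fifth line, third
// space-separated field (cut, unlike strings.Fields, splits on every
// single space).
func hostIP(nslookupOut string) string {
	lines := strings.Split(nslookupOut, "\n")
	if len(lines) < 5 {
		return ""
	}
	fields := strings.Split(lines[4], " ")
	if len(fields) < 3 {
		return ""
	}
	return fields[2]
}

func main() {
	// Hypothetical busybox-style output for host.minikube.internal.
	sample := "Server:    10.96.0.10\n" +
		"Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local\n" +
		"\n" +
		"Name:      host.minikube.internal\n" +
		"Address 1: 192.168.58.1\n"
	fmt.Println(hostIP(sample)) // 192.168.58.1
}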

TestMultiNode/serial/AddNode (17.25s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-816869 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-816869 -v 3 --alsologtostderr: (16.527063239s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-816869 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (17.25s)

TestMultiNode/serial/MultiNodeLabels (0.11s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-816869 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.11s)

TestMultiNode/serial/ProfileList (0.99s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.99s)

TestMultiNode/serial/CopyFile (10.24s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-816869 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-816869 cp testdata/cp-test.txt multinode-816869:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-816869 ssh -n multinode-816869 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-816869 cp multinode-816869:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2348078820/001/cp-test_multinode-816869.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-816869 ssh -n multinode-816869 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-816869 cp multinode-816869:/home/docker/cp-test.txt multinode-816869-m02:/home/docker/cp-test_multinode-816869_multinode-816869-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-816869 ssh -n multinode-816869 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-816869 ssh -n multinode-816869-m02 "sudo cat /home/docker/cp-test_multinode-816869_multinode-816869-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-816869 cp multinode-816869:/home/docker/cp-test.txt multinode-816869-m03:/home/docker/cp-test_multinode-816869_multinode-816869-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-816869 ssh -n multinode-816869 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-816869 ssh -n multinode-816869-m03 "sudo cat /home/docker/cp-test_multinode-816869_multinode-816869-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-816869 cp testdata/cp-test.txt multinode-816869-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-816869 ssh -n multinode-816869-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-816869 cp multinode-816869-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2348078820/001/cp-test_multinode-816869-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-816869 ssh -n multinode-816869-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-816869 cp multinode-816869-m02:/home/docker/cp-test.txt multinode-816869:/home/docker/cp-test_multinode-816869-m02_multinode-816869.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-816869 ssh -n multinode-816869-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-816869 ssh -n multinode-816869 "sudo cat /home/docker/cp-test_multinode-816869-m02_multinode-816869.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-816869 cp multinode-816869-m02:/home/docker/cp-test.txt multinode-816869-m03:/home/docker/cp-test_multinode-816869-m02_multinode-816869-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-816869 ssh -n multinode-816869-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-816869 ssh -n multinode-816869-m03 "sudo cat /home/docker/cp-test_multinode-816869-m02_multinode-816869-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-816869 cp testdata/cp-test.txt multinode-816869-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-816869 ssh -n multinode-816869-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-816869 cp multinode-816869-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2348078820/001/cp-test_multinode-816869-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-816869 ssh -n multinode-816869-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-816869 cp multinode-816869-m03:/home/docker/cp-test.txt multinode-816869:/home/docker/cp-test_multinode-816869-m03_multinode-816869.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-816869 ssh -n multinode-816869-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-816869 ssh -n multinode-816869 "sudo cat /home/docker/cp-test_multinode-816869-m03_multinode-816869.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-816869 cp multinode-816869-m03:/home/docker/cp-test.txt multinode-816869-m02:/home/docker/cp-test_multinode-816869-m03_multinode-816869-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-816869 ssh -n multinode-816869-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-816869 ssh -n multinode-816869-m02 "sudo cat /home/docker/cp-test_multinode-816869-m03_multinode-816869-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.24s)
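
The CopyFile sequence above is a full matrix: for each node, copy the fixture in and cat it back, then copy node-to-node for every ordered pair and verify on the destination. A sketch of that loop shape, with run standing in for the test's helper that shells out to out/minikube-linux-arm64 and asserts success:

package main

import "fmt"

// run is a stand-in for the helper that executes
// `out/minikube-linux-arm64 -p <profile> <args...>`.
func run(profile string, args ...string) {
	fmt.Printf("minikube -p %s %v\n", profile, args)
}

func main() {
	profile := "multinode-816869"
	nodes := []string{"multinode-816869", "multinode-816869-m02", "multinode-816869-m03"}
	for _, src := range nodes {
		// push the fixture onto src, then read it back over ssh
		run(profile, "cp", "testdata/cp-test.txt", src+":/home/docker/cp-test.txt")
		run(profile, "ssh", "-n", src, "sudo cat /home/docker/cp-test.txt")
		for _, dst := range nodes {
			if dst == src {
				continue
			}
			// node-to-node copy, verified on the destination
			dstPath := fmt.Sprintf("/home/docker/cp-test_%s_%s.txt", src, dst)
			run(profile, "cp", src+":/home/docker/cp-test.txt", dst+":"+dstPath)
			run(profile, "ssh", "-n", dst, "sudo cat "+dstPath)
		}
	}
}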

TestMultiNode/serial/StopNode (2.23s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-816869 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-816869 node stop m03: (1.215079277s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-816869 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-816869 status: exit status 7 (506.559325ms)

-- stdout --
	multinode-816869
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-816869-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-816869-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-816869 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-816869 status --alsologtostderr: exit status 7 (511.680458ms)

-- stdout --
	multinode-816869
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-816869-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-816869-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0407 13:04:13.032300 1005311 out.go:345] Setting OutFile to fd 1 ...
	I0407 13:04:13.032538 1005311 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 13:04:13.032565 1005311 out.go:358] Setting ErrFile to fd 2...
	I0407 13:04:13.032583 1005311 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 13:04:13.032871 1005311 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20602-873072/.minikube/bin
	I0407 13:04:13.033108 1005311 out.go:352] Setting JSON to false
	I0407 13:04:13.033173 1005311 mustload.go:65] Loading cluster: multinode-816869
	I0407 13:04:13.033211 1005311 notify.go:220] Checking for updates...
	I0407 13:04:13.033747 1005311 config.go:182] Loaded profile config "multinode-816869": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0407 13:04:13.034079 1005311 status.go:174] checking status of multinode-816869 ...
	I0407 13:04:13.034834 1005311 cli_runner.go:164] Run: docker container inspect multinode-816869 --format={{.State.Status}}
	I0407 13:04:13.053984 1005311 status.go:371] multinode-816869 host status = "Running" (err=<nil>)
	I0407 13:04:13.054007 1005311 host.go:66] Checking if "multinode-816869" exists ...
	I0407 13:04:13.054322 1005311 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-816869
	I0407 13:04:13.078724 1005311 host.go:66] Checking if "multinode-816869" exists ...
	I0407 13:04:13.079054 1005311 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0407 13:04:13.079093 1005311 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-816869
	I0407 13:04:13.099073 1005311 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34018 SSHKeyPath:/home/jenkins/minikube-integration/20602-873072/.minikube/machines/multinode-816869/id_rsa Username:docker}
	I0407 13:04:13.187070 1005311 ssh_runner.go:195] Run: systemctl --version
	I0407 13:04:13.191643 1005311 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0407 13:04:13.211679 1005311 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0407 13:04:13.271551 1005311 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:62 SystemTime:2025-04-07 13:04:13.261740382 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1081-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0]] Warnings:<nil>}}
	I0407 13:04:13.272124 1005311 kubeconfig.go:125] found "multinode-816869" server: "https://192.168.58.2:8443"
	I0407 13:04:13.272164 1005311 api_server.go:166] Checking apiserver status ...
	I0407 13:04:13.272213 1005311 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:04:13.283843 1005311 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1495/cgroup
	I0407 13:04:13.293792 1005311 api_server.go:182] apiserver freezer: "4:freezer:/docker/e2cd0109b5c0fd78fe29374cee8ea7afb480f56fd028ca0ad1936bccffd3eec3/kubepods/burstable/pod553a74d8c372bc52138fba46beec9aea/d685cf394ec69843a6846b7644acbfbfc69600201205e487e0f43b478ec1ccc3"
	I0407 13:04:13.293871 1005311 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/e2cd0109b5c0fd78fe29374cee8ea7afb480f56fd028ca0ad1936bccffd3eec3/kubepods/burstable/pod553a74d8c372bc52138fba46beec9aea/d685cf394ec69843a6846b7644acbfbfc69600201205e487e0f43b478ec1ccc3/freezer.state
	I0407 13:04:13.303064 1005311 api_server.go:204] freezer state: "THAWED"
	I0407 13:04:13.303093 1005311 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0407 13:04:13.310873 1005311 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0407 13:04:13.310909 1005311 status.go:463] multinode-816869 apiserver status = Running (err=<nil>)
	I0407 13:04:13.310920 1005311 status.go:176] multinode-816869 status: &{Name:multinode-816869 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0407 13:04:13.310937 1005311 status.go:174] checking status of multinode-816869-m02 ...
	I0407 13:04:13.311244 1005311 cli_runner.go:164] Run: docker container inspect multinode-816869-m02 --format={{.State.Status}}
	I0407 13:04:13.329686 1005311 status.go:371] multinode-816869-m02 host status = "Running" (err=<nil>)
	I0407 13:04:13.329745 1005311 host.go:66] Checking if "multinode-816869-m02" exists ...
	I0407 13:04:13.330084 1005311 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-816869-m02
	I0407 13:04:13.348430 1005311 host.go:66] Checking if "multinode-816869-m02" exists ...
	I0407 13:04:13.348745 1005311 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0407 13:04:13.348788 1005311 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-816869-m02
	I0407 13:04:13.366730 1005311 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34023 SSHKeyPath:/home/jenkins/minikube-integration/20602-873072/.minikube/machines/multinode-816869-m02/id_rsa Username:docker}
	I0407 13:04:13.455118 1005311 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0407 13:04:13.467240 1005311 status.go:176] multinode-816869-m02 status: &{Name:multinode-816869-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0407 13:04:13.467276 1005311 status.go:174] checking status of multinode-816869-m03 ...
	I0407 13:04:13.467620 1005311 cli_runner.go:164] Run: docker container inspect multinode-816869-m03 --format={{.State.Status}}
	I0407 13:04:13.485476 1005311 status.go:371] multinode-816869-m03 host status = "Stopped" (err=<nil>)
	I0407 13:04:13.485502 1005311 status.go:384] host is not running, skipping remaining checks
	I0407 13:04:13.485509 1005311 status.go:176] multinode-816869-m03 status: &{Name:multinode-816869-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.23s)
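
The stderr trace above lays out the status probe: pgrep the apiserver, locate its freezer cgroup via /proc/<pid>/cgroup, confirm freezer.state is THAWED (i.e. not paused), then GET /healthz and accept a 200. A minimal sketch of that last step; InsecureSkipVerify stands in for minikube's real client-cert handling, which the log does not show:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// checkHealthz mirrors the final step of the trace: the apiserver counts
// as Running only if /healthz answers HTTP 200.
func checkHealthz(url string) (bool, error) {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	return resp.StatusCode == http.StatusOK, nil
}

func main() {
	ok, err := checkHealthz("https://192.168.58.2:8443/healthz")
	fmt.Println(ok, err)
}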

TestMultiNode/serial/StartAfterStop (9.56s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-816869 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-816869 node start m03 -v=7 --alsologtostderr: (8.803001032s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-816869 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.56s)

TestMultiNode/serial/RestartKeepsNodes (82.34s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-816869
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-816869
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-816869: (25.021852924s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-816869 --wait=true -v=8 --alsologtostderr
E0407 13:05:07.904048  878594 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/addons-596243/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-816869 --wait=true -v=8 --alsologtostderr: (57.146469804s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-816869
--- PASS: TestMultiNode/serial/RestartKeepsNodes (82.34s)

TestMultiNode/serial/DeleteNode (5.37s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-816869 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-816869 node delete m03: (4.708414291s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-816869 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.37s)

TestMultiNode/serial/StopMultiNode (24.08s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-816869 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-816869 stop: (23.880178282s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-816869 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-816869 status: exit status 7 (98.6771ms)

-- stdout --
	multinode-816869
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-816869-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-816869 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-816869 status --alsologtostderr: exit status 7 (102.285702ms)

-- stdout --
	multinode-816869
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-816869-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0407 13:06:14.785419 1013566 out.go:345] Setting OutFile to fd 1 ...
	I0407 13:06:14.785632 1013566 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 13:06:14.785658 1013566 out.go:358] Setting ErrFile to fd 2...
	I0407 13:06:14.785678 1013566 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 13:06:14.786007 1013566 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20602-873072/.minikube/bin
	I0407 13:06:14.786241 1013566 out.go:352] Setting JSON to false
	I0407 13:06:14.786306 1013566 mustload.go:65] Loading cluster: multinode-816869
	I0407 13:06:14.786375 1013566 notify.go:220] Checking for updates...
	I0407 13:06:14.787427 1013566 config.go:182] Loaded profile config "multinode-816869": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0407 13:06:14.787474 1013566 status.go:174] checking status of multinode-816869 ...
	I0407 13:06:14.788123 1013566 cli_runner.go:164] Run: docker container inspect multinode-816869 --format={{.State.Status}}
	I0407 13:06:14.808391 1013566 status.go:371] multinode-816869 host status = "Stopped" (err=<nil>)
	I0407 13:06:14.808411 1013566 status.go:384] host is not running, skipping remaining checks
	I0407 13:06:14.808424 1013566 status.go:176] multinode-816869 status: &{Name:multinode-816869 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0407 13:06:14.808458 1013566 status.go:174] checking status of multinode-816869-m02 ...
	I0407 13:06:14.808772 1013566 cli_runner.go:164] Run: docker container inspect multinode-816869-m02 --format={{.State.Status}}
	I0407 13:06:14.836967 1013566 status.go:371] multinode-816869-m02 host status = "Stopped" (err=<nil>)
	I0407 13:06:14.836987 1013566 status.go:384] host is not running, skipping remaining checks
	I0407 13:06:14.836994 1013566 status.go:176] multinode-816869-m02 status: &{Name:multinode-816869-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.08s)
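
Note the exit-code convention visible in both status calls above (and in StopNode earlier): minikube status exits 0 only when everything is Running, and 7 as soon as any host is Stopped. A tiny sketch of that mapping, inferred from this log rather than from minikube's source:

package main

import "fmt"

// exitCodeFor reproduces the convention seen in the log: any non-Running
// host turns the whole status command into exit status 7.
func exitCodeFor(hostStates map[string]string) int {
	for _, state := range hostStates {
		if state != "Running" {
			return 7
		}
	}
	return 0
}

func main() {
	fmt.Println(exitCodeFor(map[string]string{
		"multinode-816869":     "Stopped",
		"multinode-816869-m02": "Stopped",
	})) // 7, matching the non-zero exits above
}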

TestMultiNode/serial/RestartMultiNode (46.93s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-816869 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0407 13:06:43.454476  878594 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/functional-062962/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-816869 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (46.203710541s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-816869 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (46.93s)

TestMultiNode/serial/ValidateNameConflict (35s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-816869
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-816869-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-816869-m02 --driver=docker  --container-runtime=containerd: exit status 14 (119.426011ms)

-- stdout --
	* [multinode-816869-m02] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20602
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20602-873072/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20602-873072/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-816869-m02' is duplicated with machine name 'multinode-816869-m02' in profile 'multinode-816869'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-816869-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-816869-m03 --driver=docker  --container-runtime=containerd: (32.467646665s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-816869
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-816869: exit status 80 (336.457544ms)

-- stdout --
	* Adding node m03 to cluster multinode-816869 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-816869-m03 already exists in multinode-816869-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-816869-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-816869-m03: (2.008943958s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (35.00s)

TestPreload (120.89s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-766271 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4
E0407 13:08:06.521883  878594 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/functional-062962/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-766271 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m24.246939795s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-766271 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-766271 image pull gcr.io/k8s-minikube/busybox: (1.946913372s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-766271
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-766271: (12.053084149s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-766271 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-766271 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (19.792224012s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-766271 image list
helpers_test.go:175: Cleaning up "test-preload-766271" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-766271
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-766271: (2.438047273s)
--- PASS: TestPreload (120.89s)
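
TestPreload's assertion is simply that an image pulled before the stop (gcr.io/k8s-minikube/busybox) still shows up in "image list" after the restart. A sketch of that observable check via os/exec; the helper name is an assumption:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // hasImage shells out to "minikube image list" and scans the output,
    // the same behavior preload_test.go verifies above.
    func hasImage(profile, image string) (bool, error) {
        out, err := exec.Command("out/minikube-linux-arm64", "-p", profile, "image", "list").Output()
        if err != nil {
            return false, err
        }
        return strings.Contains(string(out), image), nil
    }

    func main() {
        ok, err := hasImage("test-preload-766271", "gcr.io/k8s-minikube/busybox")
        fmt.Println(ok, err)
    }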

TestScheduledStopUnix (106.44s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-973356 --memory=2048 --driver=docker  --container-runtime=containerd
E0407 13:10:07.904441  878594 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/addons-596243/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-973356 --memory=2048 --driver=docker  --container-runtime=containerd: (29.506730971s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-973356 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-973356 -n scheduled-stop-973356
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-973356 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0407 13:10:11.825680  878594 retry.go:31] will retry after 126.184µs: open /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/scheduled-stop-973356/pid: no such file or directory
I0407 13:10:11.826176  878594 retry.go:31] will retry after 91.363µs: open /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/scheduled-stop-973356/pid: no such file or directory
I0407 13:10:11.826458  878594 retry.go:31] will retry after 177.379µs: open /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/scheduled-stop-973356/pid: no such file or directory
I0407 13:10:11.827596  878594 retry.go:31] will retry after 415.295µs: open /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/scheduled-stop-973356/pid: no such file or directory
I0407 13:10:11.829033  878594 retry.go:31] will retry after 266.958µs: open /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/scheduled-stop-973356/pid: no such file or directory
I0407 13:10:11.830169  878594 retry.go:31] will retry after 638.16µs: open /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/scheduled-stop-973356/pid: no such file or directory
I0407 13:10:11.831301  878594 retry.go:31] will retry after 884.205µs: open /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/scheduled-stop-973356/pid: no such file or directory
I0407 13:10:11.832441  878594 retry.go:31] will retry after 2.133377ms: open /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/scheduled-stop-973356/pid: no such file or directory
I0407 13:10:11.835663  878594 retry.go:31] will retry after 2.500831ms: open /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/scheduled-stop-973356/pid: no such file or directory
I0407 13:10:11.838959  878594 retry.go:31] will retry after 2.638615ms: open /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/scheduled-stop-973356/pid: no such file or directory
I0407 13:10:11.842288  878594 retry.go:31] will retry after 5.599495ms: open /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/scheduled-stop-973356/pid: no such file or directory
I0407 13:10:11.848691  878594 retry.go:31] will retry after 6.76872ms: open /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/scheduled-stop-973356/pid: no such file or directory
I0407 13:10:11.856388  878594 retry.go:31] will retry after 15.749899ms: open /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/scheduled-stop-973356/pid: no such file or directory
I0407 13:10:11.872625  878594 retry.go:31] will retry after 15.82718ms: open /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/scheduled-stop-973356/pid: no such file or directory
I0407 13:10:11.888879  878594 retry.go:31] will retry after 14.763025ms: open /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/scheduled-stop-973356/pid: no such file or directory
I0407 13:10:11.904087  878594 retry.go:31] will retry after 47.602824ms: open /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/scheduled-stop-973356/pid: no such file or directory
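
The retry.go lines above poll for the scheduled-stop PID file with waits that grow roughly exponentially, with jitter. A standalone approximation of that pattern (minikube's pkg/util/retry wraps a backoff library; the helper below is a sketch, not its API):

    package main

    import (
        "fmt"
        "math/rand"
        "os"
        "time"
    )

    // openWithRetry reattempts os.Open with jittered exponential backoff,
    // approximating the irregular waits (126µs, 91µs, 177µs, ...) logged above.
    func openWithRetry(path string, attempts int) (*os.File, error) {
        wait := 100 * time.Microsecond
        for i := 0; i < attempts; i++ {
            f, err := os.Open(path)
            if err == nil {
                return f, nil
            }
            fmt.Printf("will retry after %v: %v\n", wait, err)
            time.Sleep(wait)
            wait = wait*2 + time.Duration(rand.Intn(100))*time.Microsecond
        }
        return nil, fmt.Errorf("%s still absent after %d attempts", path, attempts)
    }

    func main() {
        // Hypothetical path; the test's real PID file sits under the profile dir.
        if f, err := openWithRetry("/tmp/scheduled-stop.pid", 5); err == nil {
            f.Close()
        }
    }
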
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-973356 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-973356 -n scheduled-stop-973356
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-973356
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-973356 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-973356
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-973356: exit status 7 (69.926107ms)

-- stdout --
	scheduled-stop-973356
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-973356 -n scheduled-stop-973356
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-973356 -n scheduled-stop-973356: exit status 7 (68.780295ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-973356" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-973356
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-973356: (5.327248576s)
--- PASS: TestScheduledStopUnix (106.44s)
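
The --cancel-scheduled path exercised above works by locating the daemonized stop process through its PID file (.minikube/profiles/<name>/pid in the log) and killing it before the timer fires. A rough sketch under that assumption; the path and helper are illustrative, the real implementation lives in minikube's scheduled-stop code:

    package main

    import (
        "fmt"
        "os"
        "strconv"
        "strings"
        "syscall"
    )

    // cancelScheduledStop reads the daemon's PID file and signals the
    // process, so a pending "stop in 15s" never fires.
    func cancelScheduledStop(pidFile string) error {
        b, err := os.ReadFile(pidFile)
        if err != nil {
            return err // nothing scheduled
        }
        pid, err := strconv.Atoi(strings.TrimSpace(string(b)))
        if err != nil {
            return err
        }
        if err := syscall.Kill(pid, syscall.SIGKILL); err != nil {
            return err
        }
        return os.Remove(pidFile)
    }

    func main() {
        fmt.Println(cancelScheduledStop("/tmp/scheduled-stop.pid"))
    }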

TestInsufficientStorage (11.52s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-376002 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-376002 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (8.983825545s)

-- stdout --
	{"specversion":"1.0","id":"95e97971-02a0-4775-bb5a-4303766d1b17","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-376002] minikube v1.35.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"1f4ccfe0-a4e5-4717-9025-e56a64d4c64f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20602"}}
	{"specversion":"1.0","id":"90fda5ab-9234-42c7-864a-22dbf0e4007f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"1f401125-a1c3-48d5-9405-485b589c19ed","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20602-873072/kubeconfig"}}
	{"specversion":"1.0","id":"8ed89680-b850-4305-bb38-f161e1b82836","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20602-873072/.minikube"}}
	{"specversion":"1.0","id":"7c2cfded-877a-450f-945d-40bcce3e33ba","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"63c0bc87-e62f-41e9-b095-91c34e1d2f28","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"9bd1737e-fb96-4f1a-88ac-bdf482ee9d6d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"df2d796e-5628-4d3f-8e61-d3ebad9f7728","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"a2d2a23e-da93-4125-9970-c099439afbd0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"a5d33035-8efd-4f23-bc61-440394522ba0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"68ea2e35-de90-4aac-9410-f524b72f1cd5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-376002\" primary control-plane node in \"insufficient-storage-376002\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"3a2753ee-81db-44ed-b74f-89a2dd9dc68b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.46-1743675393-20591 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"79a17e5e-02b7-4064-ab10-d0c7c40bccfc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"1a6ec537-51ab-4d2e-baa6-f23c9c38a961","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
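
With --output=json, minikube start emits one CloudEvents-style JSON object per line, as captured above; the exit code and remediation advice ride in the final io.k8s.sigs.minikube.error event. A minimal line decoder (the struct mirrors only the fields visible here and is not minikube's own type):

    package main

    import (
        "bufio"
        "encoding/json"
        "fmt"
        "os"
    )

    type event struct {
        Type string            `json:"type"`
        Data map[string]string `json:"data"`
    }

    func main() {
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 0, 1<<20), 1<<20) // error events can be long
        for sc.Scan() {
            var ev event
            if json.Unmarshal(sc.Bytes(), &ev) != nil {
                continue // tolerate non-JSON lines
            }
            if ev.Type == "io.k8s.sigs.minikube.error" {
                fmt.Printf("exit %s: %s\n", ev.Data["exitcode"], ev.Data["message"])
            }
        }
    }

Piping the stdout block above through this decoder would report exit 26 with the "Docker is out of disk space!" message.
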
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-376002 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-376002 --output=json --layout=cluster: exit status 7 (295.690694ms)

-- stdout --
	{"Name":"insufficient-storage-376002","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-376002","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0407 13:11:37.499070 1032754 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-376002" does not appear in /home/jenkins/minikube-integration/20602-873072/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-376002 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-376002 --output=json --layout=cluster: exit status 7 (290.326564ms)

-- stdout --
	{"Name":"insufficient-storage-376002","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-376002","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0407 13:11:37.790662 1032815 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-376002" does not appear in /home/jenkins/minikube-integration/20602-873072/kubeconfig
	E0407 13:11:37.801034 1032815 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/insufficient-storage-376002/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-376002" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-376002
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-376002: (1.948297858s)
--- PASS: TestInsufficientStorage (11.52s)

TestRunningBinaryUpgrade (85.96s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.2755997904 start -p running-upgrade-084122 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.2755997904 start -p running-upgrade-084122 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (49.711413922s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-084122 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E0407 13:18:10.975680  878594 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/addons-596243/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-084122 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (32.873355815s)
helpers_test.go:175: Cleaning up "running-upgrade-084122" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-084122
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-084122: (2.641836594s)
--- PASS: TestRunningBinaryUpgrade (85.96s)

TestKubernetesUpgrade (352.93s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-685745 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-685745 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (59.152818984s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-685745
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-685745: (1.277058567s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-685745 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-685745 status --format={{.Host}}: exit status 7 (130.609692ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-685745 --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-685745 --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m40.491476781s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-685745 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-685745 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-685745 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd: exit status 106 (159.456647ms)

-- stdout --
	* [kubernetes-upgrade-685745] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20602
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20602-873072/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20602-873072/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.32.2 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-685745
	    minikube start -p kubernetes-upgrade-685745 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-6857452 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.32.2, by running:
	    
	    minikube start -p kubernetes-upgrade-685745 --kubernetes-version=v1.32.2
	    

** /stderr **
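
K8S_DOWNGRADE_UNSUPPORTED above is a plain version comparison: the requested v1.20.0 sorts below the cluster's v1.32.2, so start refuses rather than risk an incompatible downgrade of cluster state. A sketch of the guard using golang.org/x/mod/semver (the dependency choice is an assumption; minikube has its own version utilities):

    package main

    import (
        "fmt"

        "golang.org/x/mod/semver"
    )

    // checkVersionChange refuses any move to an older Kubernetes version.
    func checkVersionChange(current, requested string) error {
        if semver.Compare(requested, current) < 0 {
            return fmt.Errorf("unable to safely downgrade existing Kubernetes %s cluster to %s", current, requested)
        }
        return nil
    }

    func main() {
        fmt.Println(checkVersionChange("v1.32.2", "v1.20.0")) // refused, as in the log
        fmt.Println(checkVersionChange("v1.20.0", "v1.32.2")) // nil: upgrades proceed
    }
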
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-685745 --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-685745 --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (9.006941377s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-685745" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-685745
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-685745: (2.532517529s)
--- PASS: TestKubernetesUpgrade (352.93s)

TestMissingContainerUpgrade (201.17s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.2290399484 start -p missing-upgrade-043303 --memory=2200 --driver=docker  --container-runtime=containerd
E0407 13:11:43.454579  878594 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/functional-062962/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.2290399484 start -p missing-upgrade-043303 --memory=2200 --driver=docker  --container-runtime=containerd: (1m32.183290425s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-043303
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-043303: (10.349919202s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-043303
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-043303 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-043303 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m35.116941163s)
helpers_test.go:175: Cleaning up "missing-upgrade-043303" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-043303
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-043303: (2.695926972s)
--- PASS: TestMissingContainerUpgrade (201.17s)

TestPause/serial/Start (67.49s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-471704 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-471704 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (1m7.487923548s)
--- PASS: TestPause/serial/Start (67.49s)

TestPause/serial/SecondStartNoReconfiguration (7.02s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-471704 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-471704 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (6.994720845s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (7.02s)

TestPause/serial/Pause (1.06s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-471704 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-471704 --alsologtostderr -v=5: (1.058288187s)
--- PASS: TestPause/serial/Pause (1.06s)

TestPause/serial/VerifyStatus (0.46s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-471704 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-471704 --output=json --layout=cluster: exit status 2 (462.183827ms)

-- stdout --
	{"Name":"pause-471704","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-471704","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.46s)
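
The --layout=cluster payload above reuses HTTP-flavored status codes: 200 OK, 405 Stopped, 418 Paused (and 507 InsufficientStorage in the earlier test). A small decoder for the fields shown; the structs are a sketch of the visible shape, not minikube's types:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    type component struct {
        Name       string `json:"Name"`
        StatusCode int    `json:"StatusCode"`
        StatusName string `json:"StatusName"`
    }

    type clusterStatus struct {
        Name       string `json:"Name"`
        StatusName string `json:"StatusName"`
        Nodes      []struct {
            Name       string               `json:"Name"`
            Components map[string]component `json:"Components"`
        } `json:"Nodes"`
    }

    func main() {
        // Trimmed from the stdout block above.
        raw := `{"Name":"pause-471704","StatusName":"Paused","Nodes":[{"Name":"pause-471704","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}`
        var cs clusterStatus
        if err := json.Unmarshal([]byte(raw), &cs); err != nil {
            panic(err)
        }
        for _, n := range cs.Nodes {
            for _, c := range n.Components {
                fmt.Printf("%s/%s: %d %s\n", n.Name, c.Name, c.StatusCode, c.StatusName)
            }
        }
    }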

TestPause/serial/Unpause (0.89s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-471704 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.89s)

TestPause/serial/PauseAgain (1.19s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-471704 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-471704 --alsologtostderr -v=5: (1.194361855s)
--- PASS: TestPause/serial/PauseAgain (1.19s)

TestPause/serial/DeletePaused (3.2s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-471704 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-471704 --alsologtostderr -v=5: (3.195534484s)
--- PASS: TestPause/serial/DeletePaused (3.20s)

TestPause/serial/VerifyDeletedResources (6.49s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (6.435343078s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-471704
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-471704: exit status 1 (17.979928ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-471704: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (6.49s)
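
VerifyDeletedResources leans on docker's exit codes: once the profile is deleted, "docker volume inspect pause-471704" fails with exit status 1, and that failure is exactly the signal the test wants. The same probe via os/exec (the helper name is illustrative):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // volumeGone returns true when "docker volume inspect" exits non-zero,
    // i.e. the named volume no longer exists, as in the log above.
    func volumeGone(name string) bool {
        return exec.Command("docker", "volume", "inspect", name).Run() != nil
    }

    func main() {
        fmt.Println("pause-471704 deleted:", volumeGone("pause-471704"))
    }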

TestStoppedBinaryUpgrade/Setup (0.73s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.73s)

TestStoppedBinaryUpgrade/Upgrade (107.87s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.3768771565 start -p stopped-upgrade-094417 --memory=2200 --vm-driver=docker  --container-runtime=containerd
E0407 13:15:07.904172  878594 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/addons-596243/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.3768771565 start -p stopped-upgrade-094417 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (44.735633916s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.3768771565 -p stopped-upgrade-094417 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.3768771565 -p stopped-upgrade-094417 stop: (19.981559611s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-094417 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E0407 13:16:43.454613  878594 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/functional-062962/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-094417 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (43.150102305s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (107.87s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.39s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-094417
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-094417: (1.38850922s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.39s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.13s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-031133 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-031133 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd: exit status 14 (126.456183ms)

-- stdout --
	* [NoKubernetes-031133] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20602
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20602-873072/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20602-873072/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
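
Exit status 14 (MK_USAGE) above is argument validation only; no cluster work starts. A standalone approximation of the mutually-exclusive-flag check using the standard flag package (minikube itself wires this through cobra):

    package main

    import (
        "flag"
        "fmt"
        "os"
    )

    func main() {
        noK8s := flag.Bool("no-kubernetes", false, "start the VM without Kubernetes")
        version := flag.String("kubernetes-version", "", "Kubernetes version to deploy")
        flag.Parse()
        if *noK8s && *version != "" {
            fmt.Fprintln(os.Stderr, "X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes")
            os.Exit(14)
        }
    }
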
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.13s)

TestNoKubernetes/serial/StartWithK8s (42.89s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-031133 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-031133 --driver=docker  --container-runtime=containerd: (42.418081123s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-031133 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (42.89s)

TestNetworkPlugins/group/false (4.36s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-062788 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-062788 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (196.295569ms)

-- stdout --
	* [false-062788] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20602
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20602-873072/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20602-873072/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I0407 13:19:09.650430 1069336 out.go:345] Setting OutFile to fd 1 ...
	I0407 13:19:09.650989 1069336 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 13:19:09.651033 1069336 out.go:358] Setting ErrFile to fd 2...
	I0407 13:19:09.651053 1069336 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 13:19:09.651847 1069336 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20602-873072/.minikube/bin
	I0407 13:19:09.652566 1069336 out.go:352] Setting JSON to false
	I0407 13:19:09.653652 1069336 start.go:129] hostinfo: {"hostname":"ip-172-31-24-2","uptime":18094,"bootTime":1744013856,"procs":168,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1081-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I0407 13:19:09.653853 1069336 start.go:139] virtualization:  
	I0407 13:19:09.657566 1069336 out.go:177] * [false-062788] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	I0407 13:19:09.659892 1069336 out.go:177]   - MINIKUBE_LOCATION=20602
	I0407 13:19:09.659964 1069336 notify.go:220] Checking for updates...
	I0407 13:19:09.666067 1069336 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0407 13:19:09.669358 1069336 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20602-873072/kubeconfig
	I0407 13:19:09.672433 1069336 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20602-873072/.minikube
	I0407 13:19:09.675432 1069336 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0407 13:19:09.678385 1069336 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0407 13:19:09.681830 1069336 config.go:182] Loaded profile config "NoKubernetes-031133": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0407 13:19:09.681971 1069336 driver.go:394] Setting default libvirt URI to qemu:///system
	I0407 13:19:09.715410 1069336 docker.go:123] docker version: linux-28.0.4:Docker Engine - Community
	I0407 13:19:09.715542 1069336 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0407 13:19:09.779627 1069336 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:53 SystemTime:2025-04-07 13:19:09.770628771 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1081-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0]] Warnings:<nil>}}
	I0407 13:19:09.779736 1069336 docker.go:318] overlay module found
	I0407 13:19:09.782951 1069336 out.go:177] * Using the docker driver based on user configuration
	I0407 13:19:09.785845 1069336 start.go:297] selected driver: docker
	I0407 13:19:09.785869 1069336 start.go:901] validating driver "docker" against <nil>
	I0407 13:19:09.785884 1069336 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0407 13:19:09.789456 1069336 out.go:201] 
	W0407 13:19:09.792475 1069336 out.go:270] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0407 13:19:09.795244 1069336 out.go:201] 

** /stderr **
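
The MK_USAGE failure above is the point of this group: with containerd (or any non-Docker runtime) pod networking must come from a CNI plugin, so --cni=false is rejected before any node is created. A standalone sketch of that validation (the function name is an assumption):

    package main

    import (
        "fmt"
        "os"
    )

    // validateCNI mirrors the check behind the exit above: only the
    // docker runtime can run without an explicit CNI plugin.
    func validateCNI(runtime, cni string) error {
        if cni == "false" && runtime != "docker" {
            return fmt.Errorf("The %q container runtime requires CNI", runtime)
        }
        return nil
    }

    func main() {
        if err := validateCNI("containerd", "false"); err != nil {
            fmt.Fprintln(os.Stderr, "X Exiting due to MK_USAGE:", err)
            os.Exit(14)
        }
    }
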
net_test.go:88: 
----------------------- debugLogs start: false-062788 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-062788

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-062788

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-062788

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-062788

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-062788

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-062788

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-062788

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-062788

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-062788

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-062788

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-062788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-062788"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-062788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-062788"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-062788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-062788"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-062788

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-062788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-062788"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-062788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-062788"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-062788" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-062788" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-062788" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-062788" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-062788" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-062788" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-062788" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-062788" does not exist

>>> host: /etc/cni:
* Profile "false-062788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-062788"

>>> host: ip a s:
* Profile "false-062788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-062788"

>>> host: ip r s:
* Profile "false-062788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-062788"

>>> host: iptables-save:
* Profile "false-062788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-062788"

>>> host: iptables table nat:
* Profile "false-062788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-062788"

>>> k8s: describe kube-proxy daemon set:
error: context "false-062788" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-062788" does not exist

>>> k8s: kube-proxy logs:
error: context "false-062788" does not exist

>>> host: kubelet daemon status:
* Profile "false-062788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-062788"

>>> host: kubelet daemon config:
* Profile "false-062788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-062788"

>>> k8s: kubelet logs:
* Profile "false-062788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-062788"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-062788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-062788"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-062788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-062788"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null
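
Note: the block above is what an empty kubeconfig looks like once the profile's context has been removed; clusters, contexts, and users are all null. Assuming a standard kubectl install, the same state can be confirmed by hand with:

    kubectl config view              # renders the empty document above
    kubectl config current-context   # errors, since no current context is set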

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-062788

>>> host: docker daemon status:
* Profile "false-062788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-062788"

>>> host: docker daemon config:
* Profile "false-062788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-062788"

>>> host: /etc/docker/daemon.json:
* Profile "false-062788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-062788"

>>> host: docker system info:
* Profile "false-062788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-062788"

>>> host: cri-docker daemon status:
* Profile "false-062788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-062788"

>>> host: cri-docker daemon config:
* Profile "false-062788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-062788"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-062788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-062788"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-062788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-062788"

>>> host: cri-dockerd version:
* Profile "false-062788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-062788"

>>> host: containerd daemon status:
* Profile "false-062788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-062788"

>>> host: containerd daemon config:
* Profile "false-062788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-062788"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-062788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-062788"

>>> host: /etc/containerd/config.toml:
* Profile "false-062788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-062788"

>>> host: containerd config dump:
* Profile "false-062788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-062788"

>>> host: crio daemon status:
* Profile "false-062788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-062788"

>>> host: crio daemon config:
* Profile "false-062788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-062788"

>>> host: /etc/crio:
* Profile "false-062788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-062788"

>>> host: crio config:
* Profile "false-062788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-062788"

----------------------- debugLogs end: false-062788 [took: 3.957642645s] --------------------------------
helpers_test.go:175: Cleaning up "false-062788" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-062788
--- PASS: TestNetworkPlugins/group/false (4.36s)
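Note: every debugLogs collector above failed with the same "profile not found" / "context does not exist" message because the false-062788 profile had already been torn down before log collection ran; the errors are expected here and the test itself passed. Following the log's own hint, the profiles still present on a host can be listed with:

    out/minikube-linux-arm64 profile list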

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (18.82s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-031133 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-031133 --no-kubernetes --driver=docker  --container-runtime=containerd: (15.751676436s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-031133 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-031133 status -o json: exit status 2 (404.249909ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-031133","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-031133
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-031133: (2.664808421s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (18.82s)
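Note: with --no-kubernetes the container host comes up but kubelet and the API server are left down, so status reports Host Running / Kubelet Stopped / APIServer Stopped and exits non-zero, which is exactly what the test asserts. A minimal reproduction with the same binary and flags as above:

    out/minikube-linux-arm64 start -p NoKubernetes-031133 --no-kubernetes --driver=docker --container-runtime=containerd
    out/minikube-linux-arm64 -p NoKubernetes-031133 status -o json   # non-zero exit while Kubernetes components are stopped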

                                                
                                    
TestNoKubernetes/serial/Start (5.74s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-031133 --no-kubernetes --driver=docker  --container-runtime=containerd
E0407 13:20:07.904258  878594 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/addons-596243/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-031133 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.737888284s)
--- PASS: TestNoKubernetes/serial/Start (5.74s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-031133 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-031133 "sudo systemctl is-active --quiet service kubelet": exit status 1 (274.847545ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)
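Note: systemctl is-active exits 0 only when the queried unit is active, and conventionally 3 when it is inactive; that remote status 3 (surfaced in stderr above) is what makes the ssh probe exit 1, confirming kubelet is not running. Checked by hand it would look something like:

    out/minikube-linux-arm64 ssh -p NoKubernetes-031133 'sudo systemctl is-active kubelet'   # prints "inactive", exit status 3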

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.07s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.07s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.23s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-031133
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-031133: (1.23108863s)
--- PASS: TestNoKubernetes/serial/Stop (1.23s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (7.04s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-031133 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-031133 --driver=docker  --container-runtime=containerd: (7.038213929s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.04s)
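Note: restarting the same profile with no --no-kubernetes flag reuses the stored profile configuration, so Kubernetes stays disabled; the VerifyK8sNotRunningSecond check below confirms kubelet is still inactive after this restart. The restart form, as run above:

    out/minikube-linux-arm64 start -p NoKubernetes-031133 --driver=docker --container-runtime=containerd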

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-031133 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-031133 "sudo systemctl is-active --quiet service kubelet": exit status 1 (278.735696ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (173.88s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-856421 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
E0407 13:21:43.455436  878594 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/functional-062962/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-856421 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: (2m53.883432101s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (173.88s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (73.70s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-789804 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-789804 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.2: (1m13.69491419s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (73.70s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (9.52s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-789804 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [642cda38-d460-4c25-9676-d0fef837159c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [642cda38-d460-4c25-9676-d0fef837159c] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.003496105s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-789804 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.52s)
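Note: DeployApp creates the busybox pod from testdata/busybox.yaml, polls until pods labelled integration-test=busybox are Ready (9s here), then runs a trivial exec to prove the full API path works. An equivalent hand check, assuming kubectl points at the same context (the harness does its own polling rather than shelling out to kubectl wait):

    kubectl --context no-preload-789804 create -f testdata/busybox.yaml
    kubectl --context no-preload-789804 wait --for=condition=Ready pod -l integration-test=busybox --timeout=8m
    kubectl --context no-preload-789804 exec busybox -- /bin/sh -c "ulimit -n"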

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (8.59s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-856421 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [23721e8a-2167-45b6-8ec6-b4d7f210a558] Pending
helpers_test.go:344: "busybox" [23721e8a-2167-45b6-8ec6-b4d7f210a558] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [23721e8a-2167-45b6-8ec6-b4d7f210a558] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.012890598s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-856421 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.59s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.21s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-789804 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-789804 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.067767741s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-789804 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.21s)
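Note: the --images and --registries flags override where an addon pulls its images from; pointing MetricsServer at fake.domain/registry.k8s.io/echoserver:1.4 is deliberate test plumbing (the image is never actually pullable), which is presumably why the step only describes the deployment instead of waiting for it to become Ready. The override form, verbatim from the run:

    out/minikube-linux-arm64 addons enable metrics-server -p no-preload-789804 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
    kubectl --context no-preload-789804 describe deploy/metrics-server -n kube-system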

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (12.09s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-789804 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-789804 --alsologtostderr -v=3: (12.093406544s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.09s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.04s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-856421 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-856421 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.04s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (12.27s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-856421 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-856421 --alsologtostderr -v=3: (12.265787267s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.27s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-789804 -n no-preload-789804
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-789804 -n no-preload-789804: exit status 7 (73.340801ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-789804 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)
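Note: minikube status exits non-zero against a stopped profile (the 7 seen here appears to encode that host, cluster, and Kubernetes are all down, hence the harness's "may be ok"), and enabling an addon while stopped only records the setting so it is applied on the next start:

    out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-789804 -n no-preload-789804   # prints Stopped, exit status 7
    out/minikube-linux-arm64 addons enable dashboard -p no-preload-789804 --images=MetricsScraper=registry.k8s.io/echoserver:1.4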

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (268.31s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-789804 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-789804 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.2: (4m27.913084417s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-789804 -n no-preload-789804
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (268.31s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-856421 -n old-k8s-version-856421
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-856421 -n old-k8s-version-856421: exit status 7 (106.666031ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-856421 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.26s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-8qm7v" [58773a31-dde6-492b-a529-95e3ad590eda] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003305092s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-8qm7v" [58773a31-dde6-492b-a529-95e3ad590eda] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00343845s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-789804 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-789804 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250214-acbabc1a
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)
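Note: this step lists the images present in the node and reports anything outside the expected Kubernetes image set; the kindnetd and busybox entries are informational, not failures. To inspect by hand:

    out/minikube-linux-arm64 -p no-preload-789804 image list --format=json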

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (3.22s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-789804 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-789804 -n no-preload-789804
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-789804 -n no-preload-789804: exit status 2 (328.473662ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-789804 -n no-preload-789804
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-789804 -n no-preload-789804: exit status 2 (332.751909ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-789804 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-789804 -n no-preload-789804
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-789804 -n no-preload-789804
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.22s)
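Note: pause freezes the control plane, so the follow-up status queries show APIServer Paused and Kubelet Stopped and exit 2 (non-zero because not every component is Running); unpause restores both, after which the same queries succeed. The cycle, as exercised above:

    out/minikube-linux-arm64 pause -p no-preload-789804 --alsologtostderr -v=1
    out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-789804 -n no-preload-789804   # Paused, exit status 2
    out/minikube-linux-arm64 unpause -p no-preload-789804 --alsologtostderr -v=1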

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (50.77s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-688390 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.2
E0407 13:30:07.904381  878594 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/addons-596243/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-688390 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.2: (50.767972419s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (50.77s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (9.36s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-688390 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [abd268fe-9c6f-48fe-b6be-74a54efef783] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [abd268fe-9c6f-48fe-b6be-74a54efef783] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.003587359s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-688390 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.36s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.17s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-688390 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-688390 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.027547697s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-688390 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.17s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (12.10s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-688390 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-688390 --alsologtostderr -v=3: (12.098836355s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.10s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-688390 -n embed-certs-688390
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-688390 -n embed-certs-688390: exit status 7 (79.431642ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-688390 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (274.80s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-688390 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-688390 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.2: (4m34.413308297s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-688390 -n embed-certs-688390
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (274.80s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-nrhfv" [eb23b40c-fb9c-4b5d-91d5-051560f9b449] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004024175s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.13s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-nrhfv" [eb23b40c-fb9c-4b5d-91d5-051560f9b449] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.002955455s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-856421 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.13s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.28s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-856421 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250214-acbabc1a
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.28s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (3.12s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-856421 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-856421 -n old-k8s-version-856421
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-856421 -n old-k8s-version-856421: exit status 2 (353.083095ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-856421 -n old-k8s-version-856421
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-856421 -n old-k8s-version-856421: exit status 2 (342.832339ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-856421 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-856421 -n old-k8s-version-856421
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-856421 -n old-k8s-version-856421
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.12s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (64.42s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-843385 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.2
E0407 13:31:43.455453  878594 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/functional-062962/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-843385 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.2: (1m4.422171891s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (64.42s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.35s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-843385 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [ec33081d-6495-40c9-9a6f-413d1689fbce] Pending
helpers_test.go:344: "busybox" [ec33081d-6495-40c9-9a6f-413d1689fbce] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [ec33081d-6495-40c9-9a6f-413d1689fbce] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.004004869s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-843385 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.35s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.12s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-843385 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-843385 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.012245223s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-843385 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.12s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.15s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-843385 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-843385 --alsologtostderr -v=3: (12.150382589s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.15s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.20s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-843385 -n default-k8s-diff-port-843385
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-843385 -n default-k8s-diff-port-843385: exit status 7 (77.547824ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-843385 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (279.99s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-843385 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.2
E0407 13:34:19.114559  878594 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/no-preload-789804/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:34:19.120966  878594 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/no-preload-789804/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:34:19.132398  878594 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/no-preload-789804/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:34:19.153855  878594 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/no-preload-789804/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:34:19.195329  878594 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/no-preload-789804/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:34:19.276831  878594 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/no-preload-789804/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:34:19.438351  878594 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/no-preload-789804/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:34:19.759932  878594 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/no-preload-789804/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:34:20.402292  878594 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/no-preload-789804/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:34:21.684041  878594 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/no-preload-789804/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:34:24.245533  878594 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/no-preload-789804/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:34:24.303084  878594 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/old-k8s-version-856421/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:34:24.309421  878594 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/old-k8s-version-856421/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:34:24.320901  878594 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/old-k8s-version-856421/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:34:24.342371  878594 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/old-k8s-version-856421/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:34:24.383733  878594 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/old-k8s-version-856421/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:34:24.465112  878594 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/old-k8s-version-856421/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:34:24.626662  878594 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/old-k8s-version-856421/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:34:24.948617  878594 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/old-k8s-version-856421/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:34:25.590018  878594 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/old-k8s-version-856421/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:34:26.871493  878594 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/old-k8s-version-856421/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:34:29.367553  878594 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/no-preload-789804/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:34:29.433619  878594 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/old-k8s-version-856421/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:34:34.554920  878594 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/old-k8s-version-856421/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:34:39.609014  878594 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/no-preload-789804/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:34:44.796530  878594 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/old-k8s-version-856421/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:34:50.977750  878594 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/addons-596243/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:35:00.091375  878594 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/no-preload-789804/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:35:05.278937  878594 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/old-k8s-version-856421/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:35:07.904073  878594 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/addons-596243/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-843385 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.2: (4m39.646932914s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-843385 -n default-k8s-diff-port-843385
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (279.99s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-tlhm9" [0844f7ce-1d2e-441d-b4f6-14145d285dbf] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004030616s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-tlhm9" [0844f7ce-1d2e-441d-b4f6-14145d285dbf] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003353103s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-688390 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-688390 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241212-9f82dd49
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250214-acbabc1a
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (3.21s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-688390 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-688390 -n embed-certs-688390
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-688390 -n embed-certs-688390: exit status 2 (320.384987ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-688390 -n embed-certs-688390
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-688390 -n embed-certs-688390: exit status 2 (344.861506ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-688390 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-688390 -n embed-certs-688390
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-688390 -n embed-certs-688390
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.21s)
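
The Pause sequence above can be replayed by hand with the same commands; note that minikube status intentionally exits non-zero while components are paused, which is why the harness logs "may be ok". A sketch using the profile name from this run:

	minikube pause -p embed-certs-688390
	minikube status --format='{{.APIServer}}' -p embed-certs-688390   # "Paused", exit status 2
	minikube status --format='{{.Kubelet}}' -p embed-certs-688390     # "Stopped", exit status 2
	minikube unpause -p embed-certs-688390
	minikube status -p embed-certs-688390                             # healthy again, exit status 0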

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (37.93s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-505002 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.2
E0407 13:35:41.059291  878594 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/no-preload-789804/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:35:46.240392  878594 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/old-k8s-version-856421/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-505002 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.2: (37.934642963s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (37.93s)
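
The flag combination here is what makes this group interesting: --network-plugin=cni with --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 delegates pod networking to an external CNI, and --wait=apiserver,system_pods,default_sa relaxes readiness gating because nothing can schedule until a CNI is installed. Stripped to its essentials (profile name and sizes taken from this run):

	minikube start -p newest-cni-505002 --memory=2200 \
	  --network-plugin=cni \
	  --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
	  --wait=apiserver,system_pods,default_sa \
	  --driver=docker --container-runtime=containerd --kubernetes-version=v1.32.2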

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.33s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-505002 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-505002 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.327796579s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.33s)
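
The overrides point the addon at an unreachable registry (fake.domain), so the suite can verify addon wiring without a real metrics-server image pull ever succeeding. The syntax pairs an addon image key with a replacement image and registry (values verbatim from this run):

	minikube addons enable metrics-server -p newest-cni-505002 \
	  --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
	  --registries=MetricsServer=fake.domain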

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.26s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-505002 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-505002 --alsologtostderr -v=3: (1.259289326s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.26s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-505002 -n newest-cni-505002
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-505002 -n newest-cni-505002: exit status 7 (79.647806ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-505002 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)
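
Exit status 7 here is the status command reporting a stopped host, which the harness explicitly tolerates ("may be ok") before enabling the addon offline. A sketch of the same check-then-enable flow in shell (profile name from this run; the exit-code handling is our addition):

	rc=0
	minikube status --format='{{.Host}}' -p newest-cni-505002 || rc=$?
	if [ "$rc" -eq 7 ]; then   # host is stopped; addon config is still writable
	  minikube addons enable dashboard -p newest-cni-505002 \
	    --images=MetricsScraper=registry.k8s.io/echoserver:1.4
	fi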

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (15.7s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-505002 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-505002 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.2: (15.185280819s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-505002 -n newest-cni-505002
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (15.70s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-505002 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241212-9f82dd49
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250214-acbabc1a
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)
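
image list --format=json emits one JSON object per image known to the node's runtime; the test scans the tags for anything outside the expected minikube image set. For ad-hoc use the same output can be filtered (a sketch; jq availability and the repoTags field name are assumptions, not taken from this report):

	minikube -p newest-cni-505002 image list --format=json | jq -r '.[].repoTags[]'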

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (3.1s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-505002 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-505002 -n newest-cni-505002
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-505002 -n newest-cni-505002: exit status 2 (326.675082ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-505002 -n newest-cni-505002
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-505002 -n newest-cni-505002: exit status 2 (326.172995ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-505002 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-505002 -n newest-cni-505002
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-505002 -n newest-cni-505002
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.10s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (66.75s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-062788 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
E0407 13:36:43.454560  878594 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/functional-062962/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:37:02.980852  878594 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/no-preload-789804/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:37:08.162429  878594 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/old-k8s-version-856421/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-062788 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (1m6.75120017s)
--- PASS: TestNetworkPlugins/group/auto/Start (66.75s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-fwqv9" [ecb71e35-6865-43da-aab9-988b13338b99] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003158245s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-fwqv9" [ecb71e35-6865-43da-aab9-988b13338b99] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003494456s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-843385 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.3s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-843385 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250214-acbabc1a
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241212-9f82dd49
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.30s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.64s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-843385 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-843385 -n default-k8s-diff-port-843385
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-843385 -n default-k8s-diff-port-843385: exit status 2 (345.948595ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-843385 -n default-k8s-diff-port-843385
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-843385 -n default-k8s-diff-port-843385: exit status 2 (340.707061ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-843385 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 unpause -p default-k8s-diff-port-843385 --alsologtostderr -v=1: (1.025408925s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-843385 -n default-k8s-diff-port-843385
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-843385 -n default-k8s-diff-port-843385
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.64s)
E0407 13:42:43.824437  878594 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/auto-062788/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:42:43.830917  878594 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/auto-062788/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:42:43.842331  878594 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/auto-062788/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:42:43.863866  878594 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/auto-062788/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:42:43.905392  878594 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/auto-062788/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:42:43.986904  878594 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/auto-062788/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:42:44.148405  878594 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/auto-062788/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:42:44.470286  878594 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/auto-062788/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:42:45.112569  878594 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/auto-062788/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:42:46.394889  878594 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/auto-062788/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:42:46.783897  878594 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/default-k8s-diff-port-843385/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:42:48.956317  878594 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/auto-062788/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:42:54.077846  878594 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/auto-062788/client.crt: no such file or directory" logger="UnhandledError"

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.39s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-062788 "pgrep -a kubelet"
I0407 13:37:43.498089  878594 config.go:182] Loaded profile config "auto-062788": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.39s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (11.35s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-062788 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-v489d" [68bb534f-3db9-48f7-ba67-69c35e113922] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-v489d" [68bb534f-3db9-48f7-ba67-69c35e113922] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.004010199s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.35s)
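
NetCatPod force-replaces a small netcat deployment, then polls for a Running pod. Done by hand it looks roughly like this (context name from this run; rollout status stands in for the harness's label-based poll, and the manifest path exists only inside the test tree):

	kubectl --context auto-062788 replace --force -f testdata/netcat-deployment.yaml
	kubectl --context auto-062788 rollout status deployment/netcat --timeout=15m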

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (67.62s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-062788 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-062788 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (1m7.618316298s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (67.62s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.48s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-062788 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.48s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.39s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-062788 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.39s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-062788 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.21s)
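
DNS, Localhost, and HairPin are three one-liners executed inside the netcat pod: cluster DNS resolution, a loopback dial, and a hairpin dial back to the pod through its own service name. Collected together for reference (commands verbatim from this run):

	kubectl --context auto-062788 exec deployment/netcat -- nslookup kubernetes.default
	kubectl --context auto-062788 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
	kubectl --context auto-062788 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"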

                                                
                                    
TestNetworkPlugins/group/calico/Start (64.33s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-062788 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-062788 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (1m4.325985568s)
--- PASS: TestNetworkPlugins/group/calico/Start (64.33s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-662xb" [ef55c2da-60c9-4f6b-9623-a7ea8c85d616] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003126364s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.00s)
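
ControllerPod just confirms the CNI's daemonset pod is healthy before the network is exercised. An equivalent standalone check (context name from this run; kubectl wait replaces the harness poll):

	kubectl --context kindnet-062788 -n kube-system \
	  wait --for=condition=Ready pod -l app=kindnet --timeout=10m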

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.38s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-062788 "pgrep -a kubelet"
I0407 13:39:02.690707  878594 config.go:182] Loaded profile config "kindnet-062788": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.38s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (11.34s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-062788 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-l9bcw" [571cf100-f656-4617-b5c4-758cc4405dd9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-l9bcw" [571cf100-f656-4617-b5c4-758cc4405dd9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.004835227s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.34s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.32s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-062788 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.32s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.25s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-062788 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.25s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.24s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-062788 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.24s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-qtspf" [4fe9da1e-f755-4eaf-9e71-85069389104d] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.003937442s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.42s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-062788 "pgrep -a kubelet"
I0407 13:39:32.976176  878594 config.go:182] Loaded profile config "calico-062788": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.42s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (10.41s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-062788 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-js6jm" [1e2f7d06-511b-46b4-8fd8-312a15db793a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-js6jm" [1e2f7d06-511b-46b4-8fd8-312a15db793a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.00364898s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.41s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (57.89s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-062788 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-062788 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (57.887938732s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (57.89s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-062788 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.22s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-062788 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.20s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-062788 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.21s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (52.38s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-062788 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-062788 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (52.378064463s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (52.38s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.4s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-062788 "pgrep -a kubelet"
I0407 13:40:35.786492  878594 config.go:182] Loaded profile config "custom-flannel-062788": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.40s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (9.43s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-062788 replace --force -f testdata/netcat-deployment.yaml
I0407 13:40:36.182662  878594 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-zk5pz" [ab458ab1-87c2-4da3-8def-e402a064a38d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-zk5pz" [ab458ab1-87c2-4da3-8def-e402a064a38d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.006906354s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.43s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.26s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-062788 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.26s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-062788 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.21s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-062788 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.44s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-062788 "pgrep -a kubelet"
I0407 13:41:04.996485  878594 config.go:182] Loaded profile config "enable-default-cni-062788": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.44s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.4s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-062788 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-fvdz2" [38dfb59c-7697-4acd-bcab-0e93da4944ab] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-fvdz2" [38dfb59c-7697-4acd-bcab-0e93da4944ab] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.003996265s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.40s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (55.06s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-062788 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-062788 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (55.06012536s)
--- PASS: TestNetworkPlugins/group/flannel/Start (55.06s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.36s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-062788 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.36s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.3s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-062788 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.30s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-062788 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.21s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (79.81s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-062788 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
E0407 13:41:43.454625  878594 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/functional-062962/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-062788 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (1m19.810163091s)
--- PASS: TestNetworkPlugins/group/bridge/Start (79.81s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-wzqfx" [3fdd83ef-34ba-446a-8b74-f80b40d5d75f] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003363756s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.00s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.4s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-062788 "pgrep -a kubelet"
I0407 13:42:11.278843  878594 config.go:182] Loaded profile config "flannel-062788": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.40s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (10.38s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-062788 replace --force -f testdata/netcat-deployment.yaml
I0407 13:42:11.632449  878594 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-nrmtj" [eabb4989-5e67-4cd1-9be9-b39c3a94367a] Pending
helpers_test.go:344: "netcat-5d86dc444-nrmtj" [eabb4989-5e67-4cd1-9be9-b39c3a94367a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-nrmtj" [eabb4989-5e67-4cd1-9be9-b39c3a94367a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.004530041s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.38s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-062788 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-062788 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-062788 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-062788 "pgrep -a kubelet"
I0407 13:43:02.768782  878594 config.go:182] Loaded profile config "bridge-062788": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (10.28s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-062788 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-84trt" [838e02ca-336b-4294-85aa-491cf9ece2e8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0407 13:43:04.319480  878594 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/auto-062788/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-5d86dc444-84trt" [838e02ca-336b-4294-85aa-491cf9ece2e8] Running
E0407 13:43:07.266185  878594 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/default-k8s-diff-port-843385/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.004455903s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.28s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-062788 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-062788 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-062788 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)

                                                
                                    

Test skip (30/331)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.32.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.32.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.32.2/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.32.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.32.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.32.2/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.32.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.32.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.32.2/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0.6s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-573167 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-573167" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-573167
--- SKIP: TestDownloadOnlyKic (0.60s)

                                                
                                    
TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)
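Both TestDownloadOnlyKic and TestOffline skip for the same reason: the scenario is unsupported on arm64 (see the linked issue 10144). A sketch of an architecture gate, assuming a direct runtime.GOARCH check (illustrative):

package example

import (
    "runtime"
    "testing"
)

func TestOfflineSketch(t *testing.T) {
    if runtime.GOARCH == "arm64" {
        t.Skip("skipping - only docker runtime supported on arm64, see kubernetes/minikube#10144")
    }
}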

TestAddons/serial/GCPAuth/RealCredentials (0.01s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:698: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.01s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:972: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)
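Runtime-gated skips like this one compare the runtime under test against the one the test needs. A sketch assuming the runtime arrives via a test flag (the flag wiring here is illustrative, though it mirrors the --container-runtime option):

package example

import (
    "flag"
    "testing"
)

// containerRuntime mirrors the harness's --container-runtime option (illustrative).
var containerRuntime = flag.String("container-runtime", "containerd", "container runtime under test")

func TestDockerFlagsSketch(t *testing.T) {
    if *containerRuntime != "docker" {
        t.Skipf("skipping: only runs with docker container runtime, currently testing %s", *containerRuntime)
    }
}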

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1804: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:480: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:567: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)
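All three DNS-forwarding subtests share one guard: the tunnel's DNS path only exists for the hyperkit driver on darwin, so every other combination skips. A sketch of a combined driver/OS gate, with the driver value passed in explicitly (helper name and wiring are illustrative):

package example

import (
    "runtime"
    "testing"
)

// skipUnlessHyperkitOnDarwin is a hypothetical helper modeling the shared guard.
func skipUnlessHyperkitOnDarwin(t *testing.T, driver string) {
    t.Helper()
    if runtime.GOOS != "darwin" || driver != "hyperkit" {
        t.Skip("DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding")
    }
}

func TestDNSResolutionByDigSketch(t *testing.T) {
    skipUnlessHyperkitOnDarwin(t, "docker") // this run used the docker driver
}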

TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:84: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)
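Here the gate is an opt-in flag rather than platform detection: the test runs only when the harness is invoked with --gvisor=true. A sketch of a flag-gated opt-in test (the flag wiring is illustrative):

package example

import (
    "flag"
    "testing"
)

var gvisor = flag.Bool("gvisor", false, "run the gvisor addon test")

func TestGvisorAddonSketch(t *testing.T) {
    if !*gvisor {
        t.Skip("skipping test because --gvisor=false")
    }
}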

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)
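This guard checks the environment as well as the driver: the test is only meaningful under the none driver when invoked via sudo, so SUDO_USER must be non-empty. A sketch, with the driver value hard-coded for illustration:

package example

import (
    "os"
    "testing"
)

func TestChangeNoneUserSketch(t *testing.T) {
    driver := "docker" // illustrative; the real harness reads its own driver flag
    if driver != "none" || os.Getenv("SUDO_USER") == "" {
        t.Skip("Test requires none driver and SUDO_USER env to not be empty")
    }
}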

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.18s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-324520" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-324520
--- SKIP: TestStartStop/group/disable-driver-mounts (0.18s)
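Note that the profile cleanup runs even though the test body skipped, which is why this skipped test still takes 0.18s. Registering the delete with t.Cleanup before skipping produces exactly that behavior; a sketch (binary path and profile name copied from the log above, the rest illustrative):

package example

import (
    "os/exec"
    "testing"
)

func TestDisableDriverMountsSketch(t *testing.T) {
    profile := "disable-driver-mounts-324520"
    t.Cleanup(func() {
        // Cleanup callbacks run after the test completes, including after t.Skip.
        out, err := exec.Command("out/minikube-linux-arm64", "delete", "-p", profile).CombinedOutput()
        if err != nil {
            t.Logf("failed to delete profile %s: %v\n%s", profile, err, out)
        }
    })
    t.Skip("skipping - only runs on virtualbox")
}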

TestNetworkPlugins/group/kubenet (4.88s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:631: 
----------------------- debugLogs start: kubenet-062788 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-062788

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-062788

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-062788

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-062788

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-062788

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-062788

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-062788

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-062788

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-062788

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-062788

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-062788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-062788"

>>> host: /etc/hosts:
* Profile "kubenet-062788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-062788"

>>> host: /etc/resolv.conf:
* Profile "kubenet-062788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-062788"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-062788

>>> host: crictl pods:
* Profile "kubenet-062788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-062788"

>>> host: crictl containers:
* Profile "kubenet-062788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-062788"

>>> k8s: describe netcat deployment:
error: context "kubenet-062788" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-062788" does not exist

>>> k8s: netcat logs:
error: context "kubenet-062788" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-062788" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-062788" does not exist

>>> k8s: coredns logs:
error: context "kubenet-062788" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-062788" does not exist

>>> k8s: api server logs:
error: context "kubenet-062788" does not exist

>>> host: /etc/cni:
* Profile "kubenet-062788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-062788"

>>> host: ip a s:
* Profile "kubenet-062788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-062788"

>>> host: ip r s:
* Profile "kubenet-062788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-062788"

>>> host: iptables-save:
* Profile "kubenet-062788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-062788"

>>> host: iptables table nat:
* Profile "kubenet-062788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-062788"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-062788" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-062788" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-062788" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-062788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-062788"

>>> host: kubelet daemon config:
* Profile "kubenet-062788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-062788"

>>> k8s: kubelet logs:
* Profile "kubenet-062788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-062788"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-062788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-062788"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-062788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-062788"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-062788

>>> host: docker daemon status:
* Profile "kubenet-062788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-062788"

>>> host: docker daemon config:
* Profile "kubenet-062788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-062788"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-062788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-062788"

>>> host: docker system info:
* Profile "kubenet-062788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-062788"

>>> host: cri-docker daemon status:
* Profile "kubenet-062788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-062788"

>>> host: cri-docker daemon config:
* Profile "kubenet-062788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-062788"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-062788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-062788"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-062788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-062788"

>>> host: cri-dockerd version:
* Profile "kubenet-062788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-062788"

>>> host: containerd daemon status:
* Profile "kubenet-062788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-062788"

>>> host: containerd daemon config:
* Profile "kubenet-062788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-062788"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-062788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-062788"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-062788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-062788"

>>> host: containerd config dump:
* Profile "kubenet-062788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-062788"

>>> host: crio daemon status:
* Profile "kubenet-062788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-062788"

>>> host: crio daemon config:
* Profile "kubenet-062788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-062788"

>>> host: /etc/crio:
* Profile "kubenet-062788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-062788"

>>> host: crio config:
* Profile "kubenet-062788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-062788"

----------------------- debugLogs end: kubenet-062788 [took: 4.723293561s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-062788" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-062788
--- SKIP: TestNetworkPlugins/group/kubenet (4.88s)
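The debugLogs dump above is mechanical: a fixed list of labeled diagnostic commands is run against the profile and each command's combined output is printed under a ">>> label:" header. Because the kubenet-062788 cluster was never created, every kubectl command reports a missing context and every minikube command reports a missing profile. A trimmed, illustrative sketch of that collection loop (command set and helper name are assumptions, not minikube's source):

package example

import (
    "fmt"
    "os/exec"
)

// dumpDebugLogs runs a short, labeled battery of diagnostics and prints
// whatever comes back, errors included (hypothetical reconstruction).
func dumpDebugLogs(profile string) {
    cmds := []struct {
        label string
        args  []string
    }{
        {"k8s: describe kube-proxy daemon set", []string{"kubectl", "--context", profile, "-n", "kube-system", "describe", "ds", "kube-proxy"}},
        {"host: /etc/cni", []string{"minikube", "-p", profile, "ssh", "sudo ls -la /etc/cni"}},
    }
    for _, c := range cmds {
        fmt.Printf(">>> %s:\n", c.label)
        // Failures such as "context does not exist" are printed, not fatal:
        // the dump must still work when the cluster never came up.
        out, _ := exec.Command(c.args[0], c.args[1:]...).CombinedOutput()
        fmt.Printf("%s\n", out)
    }
}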

TestNetworkPlugins/group/cilium (5.17s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:631: 
----------------------- debugLogs start: cilium-062788 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-062788

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-062788

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-062788

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-062788

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-062788

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-062788

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-062788

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-062788

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-062788

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-062788

>>> host: /etc/nsswitch.conf:
* Profile "cilium-062788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-062788"

>>> host: /etc/hosts:
* Profile "cilium-062788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-062788"

>>> host: /etc/resolv.conf:
* Profile "cilium-062788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-062788"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-062788

>>> host: crictl pods:
* Profile "cilium-062788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-062788"

>>> host: crictl containers:
* Profile "cilium-062788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-062788"

>>> k8s: describe netcat deployment:
error: context "cilium-062788" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-062788" does not exist

>>> k8s: netcat logs:
error: context "cilium-062788" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-062788" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-062788" does not exist

>>> k8s: coredns logs:
error: context "cilium-062788" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-062788" does not exist

>>> k8s: api server logs:
error: context "cilium-062788" does not exist

>>> host: /etc/cni:
* Profile "cilium-062788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-062788"

>>> host: ip a s:
* Profile "cilium-062788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-062788"

>>> host: ip r s:
* Profile "cilium-062788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-062788"

>>> host: iptables-save:
* Profile "cilium-062788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-062788"

>>> host: iptables table nat:
* Profile "cilium-062788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-062788"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-062788

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-062788

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-062788" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-062788" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-062788

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-062788

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-062788" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-062788" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-062788" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-062788" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-062788" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-062788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-062788"

>>> host: kubelet daemon config:
* Profile "cilium-062788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-062788"

>>> k8s: kubelet logs:
* Profile "cilium-062788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-062788"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-062788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-062788"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-062788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-062788"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-062788

>>> host: docker daemon status:
* Profile "cilium-062788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-062788"

>>> host: docker daemon config:
* Profile "cilium-062788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-062788"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-062788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-062788"

>>> host: docker system info:
* Profile "cilium-062788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-062788"

>>> host: cri-docker daemon status:
* Profile "cilium-062788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-062788"

>>> host: cri-docker daemon config:
* Profile "cilium-062788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-062788"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-062788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-062788"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-062788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-062788"

>>> host: cri-dockerd version:
* Profile "cilium-062788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-062788"

>>> host: containerd daemon status:
* Profile "cilium-062788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-062788"

>>> host: containerd daemon config:
* Profile "cilium-062788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-062788"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-062788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-062788"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-062788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-062788"

>>> host: containerd config dump:
* Profile "cilium-062788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-062788"

>>> host: crio daemon status:
* Profile "cilium-062788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-062788"

>>> host: crio daemon config:
* Profile "cilium-062788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-062788"

>>> host: /etc/crio:
* Profile "cilium-062788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-062788"

>>> host: crio config:
* Profile "cilium-062788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-062788"

----------------------- debugLogs end: cilium-062788 [took: 4.948960723s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-062788" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-062788
--- SKIP: TestNetworkPlugins/group/cilium (5.17s)