Test Report: Docker_Linux_docker_arm64 20602

a90248a4a931d52b681e38138304d5427e54b74a:2025-04-07:39037

Failed tests (1/346)

Order | Failed test | Duration (s)
313 | TestStartStop/group/old-k8s-version/serial/SecondStart | 373.35
TestStartStop/group/old-k8s-version/serial/SecondStart (373.35s)
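To reproduce this failure outside the CI harness, the failing start command from the log below can be rerun by hand. This is a minimal sketch, assuming a minikube source checkout at the commit above with out/minikube-linux-arm64 already built; the go test invocation is the generic Go subtest mechanism, and the CI harness may pass additional flags:

	out/minikube-linux-arm64 start -p old-k8s-version-907855 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --container-runtime=docker --kubernetes-version=v1.20.0

	# or drive just this subtest through the integration test suite:
	go test ./test/integration -run 'TestStartStop/group/old-k8s-version/serial/SecondStart' -timeout 60m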

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-907855 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p old-k8s-version-907855 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0: exit status 102 (6m10.619422763s)

-- stdout --
	* [old-k8s-version-907855] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20602
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20602-902080/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20602-902080/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.32.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.2
	* Using the docker driver based on existing profile
	* Starting "old-k8s-version-907855" primary control-plane node in "old-k8s-version-907855" cluster
	* Pulling base image v0.0.46-1743675393-20591 ...
	* Restarting existing docker container for "old-k8s-version-907855" ...
	* Preparing Kubernetes v1.20.0 on Docker 28.0.4 ...
	* Verifying Kubernetes components...
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image registry.k8s.io/echoserver:1.4
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-907855 addons enable metrics-server
	
	* Enabled addons: default-storageclass, metrics-server, storage-provisioner, dashboard
	
	

-- /stdout --
** stderr ** 
	I0407 12:56:06.026440 1223988 out.go:345] Setting OutFile to fd 1 ...
	I0407 12:56:06.026617 1223988 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 12:56:06.026623 1223988 out.go:358] Setting ErrFile to fd 2...
	I0407 12:56:06.026628 1223988 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 12:56:06.026886 1223988 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20602-902080/.minikube/bin
	I0407 12:56:06.027306 1223988 out.go:352] Setting JSON to false
	I0407 12:56:06.028506 1223988 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":16710,"bootTime":1744013856,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1081-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0407 12:56:06.028572 1223988 start.go:139] virtualization:  
	I0407 12:56:06.032384 1223988 out.go:177] * [old-k8s-version-907855] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	I0407 12:56:06.035796 1223988 notify.go:220] Checking for updates...
	I0407 12:56:06.040611 1223988 out.go:177]   - MINIKUBE_LOCATION=20602
	I0407 12:56:06.043635 1223988 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0407 12:56:06.046582 1223988 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20602-902080/kubeconfig
	I0407 12:56:06.049623 1223988 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20602-902080/.minikube
	I0407 12:56:06.052580 1223988 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0407 12:56:06.055456 1223988 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0407 12:56:06.058722 1223988 config.go:182] Loaded profile config "old-k8s-version-907855": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0407 12:56:06.062121 1223988 out.go:177] * Kubernetes 1.32.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.2
	I0407 12:56:06.064990 1223988 driver.go:394] Setting default libvirt URI to qemu:///system
	I0407 12:56:06.100301 1223988 docker.go:123] docker version: linux-28.0.4:Docker Engine - Community
	I0407 12:56:06.100434 1223988 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0407 12:56:06.227239 1223988 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:43 OomKillDisable:true NGoroutines:52 SystemTime:2025-04-07 12:56:06.184232415 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1081-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0]] Warnings:<nil>}}
	I0407 12:56:06.227361 1223988 docker.go:318] overlay module found
	I0407 12:56:06.230488 1223988 out.go:177] * Using the docker driver based on existing profile
	I0407 12:56:06.233301 1223988 start.go:297] selected driver: docker
	I0407 12:56:06.233325 1223988 start.go:901] validating driver "docker" against &{Name:old-k8s-version-907855 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-907855 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 12:56:06.233448 1223988 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0407 12:56:06.234152 1223988 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0407 12:56:06.344381 1223988 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-04-07 12:56:06.325754024 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1081-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0]] Warnings:<nil>}}
	I0407 12:56:06.344721 1223988 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0407 12:56:06.344758 1223988 cni.go:84] Creating CNI manager for ""
	I0407 12:56:06.344834 1223988 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0407 12:56:06.344887 1223988 start.go:340] cluster config:
	{Name:old-k8s-version-907855 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-907855 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 12:56:06.348207 1223988 out.go:177] * Starting "old-k8s-version-907855" primary control-plane node in "old-k8s-version-907855" cluster
	I0407 12:56:06.351071 1223988 cache.go:121] Beginning downloading kic base image for docker with docker
	I0407 12:56:06.353937 1223988 out.go:177] * Pulling base image v0.0.46-1743675393-20591 ...
	I0407 12:56:06.356886 1223988 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0407 12:56:06.356953 1223988 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20602-902080/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0407 12:56:06.356973 1223988 cache.go:56] Caching tarball of preloaded images
	I0407 12:56:06.357100 1223988 preload.go:172] Found /home/jenkins/minikube-integration/20602-902080/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0407 12:56:06.357115 1223988 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0407 12:56:06.357226 1223988 profile.go:143] Saving config to /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/old-k8s-version-907855/config.json ...
	I0407 12:56:06.357457 1223988 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 in local docker daemon
	I0407 12:56:06.382016 1223988 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 in local docker daemon, skipping pull
	I0407 12:56:06.382041 1223988 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 exists in daemon, skipping load
	I0407 12:56:06.382056 1223988 cache.go:230] Successfully downloaded all kic artifacts
	I0407 12:56:06.382079 1223988 start.go:360] acquireMachinesLock for old-k8s-version-907855: {Name:mkfc9c2f7982f64a84c9ad90928bb59d5c00165b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0407 12:56:06.382139 1223988 start.go:364] duration metric: took 39.73µs to acquireMachinesLock for "old-k8s-version-907855"
	I0407 12:56:06.382158 1223988 start.go:96] Skipping create...Using existing machine configuration
	I0407 12:56:06.382164 1223988 fix.go:54] fixHost starting: 
	I0407 12:56:06.382413 1223988 cli_runner.go:164] Run: docker container inspect old-k8s-version-907855 --format={{.State.Status}}
	I0407 12:56:06.412746 1223988 fix.go:112] recreateIfNeeded on old-k8s-version-907855: state=Stopped err=<nil>
	W0407 12:56:06.412794 1223988 fix.go:138] unexpected machine state, will restart: <nil>
	I0407 12:56:06.416402 1223988 out.go:177] * Restarting existing docker container for "old-k8s-version-907855" ...
	I0407 12:56:06.419280 1223988 cli_runner.go:164] Run: docker start old-k8s-version-907855
	I0407 12:56:06.794019 1223988 cli_runner.go:164] Run: docker container inspect old-k8s-version-907855 --format={{.State.Status}}
	I0407 12:56:06.817276 1223988 kic.go:430] container "old-k8s-version-907855" state is running.
	I0407 12:56:06.817665 1223988 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-907855
	I0407 12:56:06.846064 1223988 profile.go:143] Saving config to /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/old-k8s-version-907855/config.json ...
	I0407 12:56:06.846307 1223988 machine.go:93] provisionDockerMachine start ...
	I0407 12:56:06.846378 1223988 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-907855
	I0407 12:56:06.877839 1223988 main.go:141] libmachine: Using SSH client type: native
	I0407 12:56:06.878153 1223988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e66c0] 0x3e8e80 <nil>  [] 0s} 127.0.0.1 34191 <nil> <nil>}
	I0407 12:56:06.878162 1223988 main.go:141] libmachine: About to run SSH command:
	hostname
	I0407 12:56:06.878816 1223988 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0407 12:56:10.005310 1223988 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-907855
	
	I0407 12:56:10.005361 1223988 ubuntu.go:169] provisioning hostname "old-k8s-version-907855"
	I0407 12:56:10.005457 1223988 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-907855
	I0407 12:56:10.026768 1223988 main.go:141] libmachine: Using SSH client type: native
	I0407 12:56:10.027155 1223988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e66c0] 0x3e8e80 <nil>  [] 0s} 127.0.0.1 34191 <nil> <nil>}
	I0407 12:56:10.027195 1223988 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-907855 && echo "old-k8s-version-907855" | sudo tee /etc/hostname
	I0407 12:56:10.179125 1223988 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-907855
	
	I0407 12:56:10.179218 1223988 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-907855
	I0407 12:56:10.197353 1223988 main.go:141] libmachine: Using SSH client type: native
	I0407 12:56:10.197661 1223988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e66c0] 0x3e8e80 <nil>  [] 0s} 127.0.0.1 34191 <nil> <nil>}
	I0407 12:56:10.197682 1223988 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-907855' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-907855/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-907855' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0407 12:56:10.320849 1223988 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0407 12:56:10.320876 1223988 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20602-902080/.minikube CaCertPath:/home/jenkins/minikube-integration/20602-902080/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20602-902080/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20602-902080/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20602-902080/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20602-902080/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20602-902080/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20602-902080/.minikube}
	I0407 12:56:10.320914 1223988 ubuntu.go:177] setting up certificates
	I0407 12:56:10.320925 1223988 provision.go:84] configureAuth start
	I0407 12:56:10.320998 1223988 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-907855
	I0407 12:56:10.339265 1223988 provision.go:143] copyHostCerts
	I0407 12:56:10.339346 1223988 exec_runner.go:144] found /home/jenkins/minikube-integration/20602-902080/.minikube/ca.pem, removing ...
	I0407 12:56:10.339364 1223988 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20602-902080/.minikube/ca.pem
	I0407 12:56:10.339453 1223988 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20602-902080/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20602-902080/.minikube/ca.pem (1078 bytes)
	I0407 12:56:10.339617 1223988 exec_runner.go:144] found /home/jenkins/minikube-integration/20602-902080/.minikube/cert.pem, removing ...
	I0407 12:56:10.339625 1223988 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20602-902080/.minikube/cert.pem
	I0407 12:56:10.339653 1223988 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20602-902080/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20602-902080/.minikube/cert.pem (1123 bytes)
	I0407 12:56:10.339708 1223988 exec_runner.go:144] found /home/jenkins/minikube-integration/20602-902080/.minikube/key.pem, removing ...
	I0407 12:56:10.339713 1223988 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20602-902080/.minikube/key.pem
	I0407 12:56:10.339748 1223988 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20602-902080/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20602-902080/.minikube/key.pem (1675 bytes)
	I0407 12:56:10.339800 1223988 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20602-902080/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20602-902080/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20602-902080/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-907855 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-907855]
	I0407 12:56:10.666725 1223988 provision.go:177] copyRemoteCerts
	I0407 12:56:10.666801 1223988 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0407 12:56:10.666853 1223988 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-907855
	I0407 12:56:10.685730 1223988 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34191 SSHKeyPath:/home/jenkins/minikube-integration/20602-902080/.minikube/machines/old-k8s-version-907855/id_rsa Username:docker}
	I0407 12:56:10.794777 1223988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-902080/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0407 12:56:10.859821 1223988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-902080/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0407 12:56:10.912476 1223988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-902080/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0407 12:56:10.968309 1223988 provision.go:87] duration metric: took 647.366409ms to configureAuth
	I0407 12:56:10.968332 1223988 ubuntu.go:193] setting minikube options for container-runtime
	I0407 12:56:10.968528 1223988 config.go:182] Loaded profile config "old-k8s-version-907855": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0407 12:56:10.968580 1223988 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-907855
	I0407 12:56:10.995736 1223988 main.go:141] libmachine: Using SSH client type: native
	I0407 12:56:10.996048 1223988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e66c0] 0x3e8e80 <nil>  [] 0s} 127.0.0.1 34191 <nil> <nil>}
	I0407 12:56:10.996058 1223988 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0407 12:56:11.151735 1223988 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0407 12:56:11.151797 1223988 ubuntu.go:71] root file system type: overlay
	I0407 12:56:11.151988 1223988 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0407 12:56:11.152116 1223988 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-907855
	I0407 12:56:11.189346 1223988 main.go:141] libmachine: Using SSH client type: native
	I0407 12:56:11.189649 1223988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e66c0] 0x3e8e80 <nil>  [] 0s} 127.0.0.1 34191 <nil> <nil>}
	I0407 12:56:11.189727 1223988 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0407 12:56:11.368625 1223988 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0407 12:56:11.368777 1223988 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-907855
	I0407 12:56:11.398642 1223988 main.go:141] libmachine: Using SSH client type: native
	I0407 12:56:11.399019 1223988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e66c0] 0x3e8e80 <nil>  [] 0s} 127.0.0.1 34191 <nil> <nil>}
	I0407 12:56:11.399038 1223988 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0407 12:56:11.563202 1223988 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0407 12:56:11.563277 1223988 machine.go:96] duration metric: took 4.716957013s to provisionDockerMachine
	I0407 12:56:11.563305 1223988 start.go:293] postStartSetup for "old-k8s-version-907855" (driver="docker")
	I0407 12:56:11.563348 1223988 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0407 12:56:11.563447 1223988 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0407 12:56:11.563557 1223988 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-907855
	I0407 12:56:11.593767 1223988 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34191 SSHKeyPath:/home/jenkins/minikube-integration/20602-902080/.minikube/machines/old-k8s-version-907855/id_rsa Username:docker}
	I0407 12:56:11.699253 1223988 ssh_runner.go:195] Run: cat /etc/os-release
	I0407 12:56:11.702556 1223988 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0407 12:56:11.702590 1223988 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0407 12:56:11.702601 1223988 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0407 12:56:11.702609 1223988 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0407 12:56:11.702619 1223988 filesync.go:126] Scanning /home/jenkins/minikube-integration/20602-902080/.minikube/addons for local assets ...
	I0407 12:56:11.702687 1223988 filesync.go:126] Scanning /home/jenkins/minikube-integration/20602-902080/.minikube/files for local assets ...
	I0407 12:56:11.702777 1223988 filesync.go:149] local asset: /home/jenkins/minikube-integration/20602-902080/.minikube/files/etc/ssl/certs/9074612.pem -> 9074612.pem in /etc/ssl/certs
	I0407 12:56:11.702889 1223988 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0407 12:56:11.715641 1223988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-902080/.minikube/files/etc/ssl/certs/9074612.pem --> /etc/ssl/certs/9074612.pem (1708 bytes)
	I0407 12:56:11.764202 1223988 start.go:296] duration metric: took 200.867859ms for postStartSetup
	I0407 12:56:11.764329 1223988 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0407 12:56:11.764378 1223988 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-907855
	I0407 12:56:11.792696 1223988 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34191 SSHKeyPath:/home/jenkins/minikube-integration/20602-902080/.minikube/machines/old-k8s-version-907855/id_rsa Username:docker}
	I0407 12:56:11.886321 1223988 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0407 12:56:11.898560 1223988 fix.go:56] duration metric: took 5.516388097s for fixHost
	I0407 12:56:11.898590 1223988 start.go:83] releasing machines lock for "old-k8s-version-907855", held for 5.516441907s
	I0407 12:56:11.898679 1223988 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-907855
	I0407 12:56:11.929479 1223988 ssh_runner.go:195] Run: cat /version.json
	I0407 12:56:11.929547 1223988 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-907855
	I0407 12:56:11.929555 1223988 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0407 12:56:11.929610 1223988 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-907855
	I0407 12:56:11.980112 1223988 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34191 SSHKeyPath:/home/jenkins/minikube-integration/20602-902080/.minikube/machines/old-k8s-version-907855/id_rsa Username:docker}
	I0407 12:56:11.988307 1223988 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34191 SSHKeyPath:/home/jenkins/minikube-integration/20602-902080/.minikube/machines/old-k8s-version-907855/id_rsa Username:docker}
	I0407 12:56:12.278713 1223988 ssh_runner.go:195] Run: systemctl --version
	I0407 12:56:12.284241 1223988 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0407 12:56:12.294421 1223988 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0407 12:56:12.319680 1223988 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0407 12:56:12.319809 1223988 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0407 12:56:12.345836 1223988 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0407 12:56:12.383845 1223988 cni.go:308] configured [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0407 12:56:12.383925 1223988 start.go:495] detecting cgroup driver to use...
	I0407 12:56:12.383980 1223988 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0407 12:56:12.384108 1223988 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0407 12:56:12.408611 1223988 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I0407 12:56:12.425591 1223988 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0407 12:56:12.443328 1223988 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0407 12:56:12.443440 1223988 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0407 12:56:12.455581 1223988 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0407 12:56:12.474812 1223988 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0407 12:56:12.494692 1223988 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0407 12:56:12.519251 1223988 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0407 12:56:12.535404 1223988 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0407 12:56:12.547375 1223988 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0407 12:56:12.560992 1223988 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0407 12:56:12.574462 1223988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 12:56:12.744969 1223988 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0407 12:56:12.912026 1223988 start.go:495] detecting cgroup driver to use...
	I0407 12:56:12.912111 1223988 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0407 12:56:12.912183 1223988 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0407 12:56:12.949217 1223988 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0407 12:56:12.949343 1223988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0407 12:56:12.976056 1223988 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0407 12:56:13.017760 1223988 ssh_runner.go:195] Run: which cri-dockerd
	I0407 12:56:13.027032 1223988 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0407 12:56:13.043295 1223988 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0407 12:56:13.073454 1223988 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0407 12:56:13.231174 1223988 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0407 12:56:13.382014 1223988 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0407 12:56:13.382165 1223988 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0407 12:56:13.436868 1223988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 12:56:13.598813 1223988 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0407 12:56:14.342426 1223988 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0407 12:56:14.373287 1223988 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0407 12:56:14.402067 1223988 out.go:235] * Preparing Kubernetes v1.20.0 on Docker 28.0.4 ...
	I0407 12:56:14.402204 1223988 cli_runner.go:164] Run: docker network inspect old-k8s-version-907855 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0407 12:56:14.420986 1223988 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0407 12:56:14.425968 1223988 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0407 12:56:14.437461 1223988 kubeadm.go:883] updating cluster {Name:old-k8s-version-907855 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-907855 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0407 12:56:14.437580 1223988 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0407 12:56:14.437640 1223988 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0407 12:56:14.462870 1223988 docker.go:689] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-proxy:v1.20.0
	registry.k8s.io/kube-proxy:v1.20.0
	k8s.gcr.io/kube-controller-manager:v1.20.0
	registry.k8s.io/kube-controller-manager:v1.20.0
	registry.k8s.io/kube-apiserver:v1.20.0
	k8s.gcr.io/kube-apiserver:v1.20.0
	k8s.gcr.io/kube-scheduler:v1.20.0
	registry.k8s.io/kube-scheduler:v1.20.0
	registry.k8s.io/etcd:3.4.13-0
	k8s.gcr.io/etcd:3.4.13-0
	k8s.gcr.io/coredns:1.7.0
	registry.k8s.io/coredns:1.7.0
	k8s.gcr.io/pause:3.2
	registry.k8s.io/pause:3.2
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0407 12:56:14.462897 1223988 docker.go:619] Images already preloaded, skipping extraction
	I0407 12:56:14.462968 1223988 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0407 12:56:14.483318 1223988 docker.go:689] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-proxy:v1.20.0
	registry.k8s.io/kube-proxy:v1.20.0
	registry.k8s.io/kube-apiserver:v1.20.0
	k8s.gcr.io/kube-apiserver:v1.20.0
	k8s.gcr.io/kube-controller-manager:v1.20.0
	registry.k8s.io/kube-controller-manager:v1.20.0
	k8s.gcr.io/kube-scheduler:v1.20.0
	registry.k8s.io/kube-scheduler:v1.20.0
	k8s.gcr.io/etcd:3.4.13-0
	registry.k8s.io/etcd:3.4.13-0
	k8s.gcr.io/coredns:1.7.0
	registry.k8s.io/coredns:1.7.0
	k8s.gcr.io/pause:3.2
	registry.k8s.io/pause:3.2
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0407 12:56:14.483346 1223988 cache_images.go:84] Images are preloaded, skipping loading
	I0407 12:56:14.483357 1223988 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.20.0 docker true true} ...
	I0407 12:56:14.483473 1223988 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-907855 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-907855 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0407 12:56:14.483561 1223988 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0407 12:56:14.535115 1223988 cni.go:84] Creating CNI manager for ""
	I0407 12:56:14.535145 1223988 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0407 12:56:14.535157 1223988 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0407 12:56:14.535174 1223988 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-907855 NodeName:old-k8s-version-907855 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0407 12:56:14.535318 1223988 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-907855"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0407 12:56:14.535389 1223988 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0407 12:56:14.544724 1223988 binaries.go:44] Found k8s binaries, skipping transfer
	I0407 12:56:14.544834 1223988 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0407 12:56:14.553778 1223988 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (348 bytes)
	I0407 12:56:14.573911 1223988 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0407 12:56:14.593113 1223988 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2118 bytes)
	I0407 12:56:14.612036 1223988 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0407 12:56:14.615589 1223988 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0407 12:56:14.626907 1223988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 12:56:14.710414 1223988 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0407 12:56:14.725525 1223988 certs.go:68] Setting up /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/old-k8s-version-907855 for IP: 192.168.85.2
	I0407 12:56:14.725588 1223988 certs.go:194] generating shared ca certs ...
	I0407 12:56:14.725618 1223988 certs.go:226] acquiring lock for ca certs: {Name:mkba0a753a861c7f506d6ba219d653aabf2f5ff0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 12:56:14.725784 1223988 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20602-902080/.minikube/ca.key
	I0407 12:56:14.725869 1223988 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20602-902080/.minikube/proxy-client-ca.key
	I0407 12:56:14.725903 1223988 certs.go:256] generating profile certs ...
	I0407 12:56:14.726026 1223988 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/old-k8s-version-907855/client.key
	I0407 12:56:14.726111 1223988 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/old-k8s-version-907855/apiserver.key.ca3b3c8c
	I0407 12:56:14.726187 1223988 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/old-k8s-version-907855/proxy-client.key
	I0407 12:56:14.726338 1223988 certs.go:484] found cert: /home/jenkins/minikube-integration/20602-902080/.minikube/certs/907461.pem (1338 bytes)
	W0407 12:56:14.726393 1223988 certs.go:480] ignoring /home/jenkins/minikube-integration/20602-902080/.minikube/certs/907461_empty.pem, impossibly tiny 0 bytes
	I0407 12:56:14.726419 1223988 certs.go:484] found cert: /home/jenkins/minikube-integration/20602-902080/.minikube/certs/ca-key.pem (1675 bytes)
	I0407 12:56:14.726478 1223988 certs.go:484] found cert: /home/jenkins/minikube-integration/20602-902080/.minikube/certs/ca.pem (1078 bytes)
	I0407 12:56:14.726528 1223988 certs.go:484] found cert: /home/jenkins/minikube-integration/20602-902080/.minikube/certs/cert.pem (1123 bytes)
	I0407 12:56:14.726584 1223988 certs.go:484] found cert: /home/jenkins/minikube-integration/20602-902080/.minikube/certs/key.pem (1675 bytes)
	I0407 12:56:14.726652 1223988 certs.go:484] found cert: /home/jenkins/minikube-integration/20602-902080/.minikube/files/etc/ssl/certs/9074612.pem (1708 bytes)
	I0407 12:56:14.727323 1223988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-902080/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0407 12:56:14.755187 1223988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-902080/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0407 12:56:14.780905 1223988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-902080/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0407 12:56:14.806661 1223988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-902080/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0407 12:56:14.831957 1223988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/old-k8s-version-907855/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0407 12:56:14.856917 1223988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/old-k8s-version-907855/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0407 12:56:14.881837 1223988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/old-k8s-version-907855/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0407 12:56:14.911956 1223988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/old-k8s-version-907855/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0407 12:56:14.963810 1223988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-902080/.minikube/files/etc/ssl/certs/9074612.pem --> /usr/share/ca-certificates/9074612.pem (1708 bytes)
	I0407 12:56:15.000434 1223988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-902080/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0407 12:56:15.041904 1223988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-902080/.minikube/certs/907461.pem --> /usr/share/ca-certificates/907461.pem (1338 bytes)
	I0407 12:56:15.077645 1223988 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0407 12:56:15.101897 1223988 ssh_runner.go:195] Run: openssl version
	I0407 12:56:15.108111 1223988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9074612.pem && ln -fs /usr/share/ca-certificates/9074612.pem /etc/ssl/certs/9074612.pem"
	I0407 12:56:15.119686 1223988 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9074612.pem
	I0407 12:56:15.123664 1223988 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  7 12:11 /usr/share/ca-certificates/9074612.pem
	I0407 12:56:15.123801 1223988 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9074612.pem
	I0407 12:56:15.133432 1223988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9074612.pem /etc/ssl/certs/3ec20f2e.0"
	I0407 12:56:15.144667 1223988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0407 12:56:15.155541 1223988 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0407 12:56:15.159554 1223988 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  7 12:04 /usr/share/ca-certificates/minikubeCA.pem
	I0407 12:56:15.159688 1223988 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0407 12:56:15.167261 1223988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0407 12:56:15.180595 1223988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/907461.pem && ln -fs /usr/share/ca-certificates/907461.pem /etc/ssl/certs/907461.pem"
	I0407 12:56:15.196327 1223988 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/907461.pem
	I0407 12:56:15.201611 1223988 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  7 12:11 /usr/share/ca-certificates/907461.pem
	I0407 12:56:15.201685 1223988 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/907461.pem
	I0407 12:56:15.209115 1223988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/907461.pem /etc/ssl/certs/51391683.0"
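The test/ln command pairs above install each CA under /etc/ssl/certs/<subject-hash>.0, the filename convention OpenSSL uses to locate trust anchors (e.g. b5213941.0 for minikubeCA.pem). A sketch of the same dance in Go, shelling out to openssl exactly as the logged commands do; minikube actually runs these over SSH inside the node:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCA mirrors the logged commands: compute the OpenSSL subject
// hash of the PEM and symlink /etc/ssl/certs/<hash>.0 at it.
func installCA(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	os.Remove(link) // ln -fs semantics: replace any stale link
	return os.Symlink(pemPath, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}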
	I0407 12:56:15.218615 1223988 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0407 12:56:15.222410 1223988 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0407 12:56:15.230274 1223988 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0407 12:56:15.238241 1223988 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0407 12:56:15.246278 1223988 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0407 12:56:15.254119 1223988 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0407 12:56:15.261463 1223988 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
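Each `openssl x509 ... -checkend 86400` run above asserts that a control-plane certificate remains valid for at least another 86400 seconds (24 hours). The equivalent check expressed in Go, as a sketch:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// checkEnd fails if the cert expires within d, matching
// `openssl x509 -noout -checkend <seconds>`.
func checkEnd(path string, d time.Duration) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return fmt.Errorf("%s: not PEM", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return err
	}
	if time.Now().Add(d).After(cert.NotAfter) {
		return fmt.Errorf("%s expires within %s (NotAfter %s)", path, d, cert.NotAfter)
	}
	return nil
}

func main() {
	fmt.Println(checkEnd("/var/lib/minikube/certs/apiserver.crt", 24*time.Hour))
}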
	I0407 12:56:15.268708 1223988 kubeadm.go:392] StartCluster: {Name:old-k8s-version-907855 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-907855 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 12:56:15.268870 1223988 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0407 12:56:15.287394 1223988 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0407 12:56:15.296833 1223988 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0407 12:56:15.296867 1223988 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0407 12:56:15.296957 1223988 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0407 12:56:15.305843 1223988 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
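The earlier `sudo ls` of kubeadm-flags.env, config.yaml and /var/lib/minikube/etcd is how the restart path gets chosen: if that state already exists on the node, minikube attempts a cluster restart rather than a fresh kubeadm init. A local-filesystem sketch of that probe (the real check runs over SSH; paths are the ones the log lists):

package main

import (
	"fmt"
	"os"
)

// hasExistingCluster checks the same three paths the log lists; all
// present means "attempt cluster restart" rather than a fresh init.
func hasExistingCluster() bool {
	for _, p := range []string{
		"/var/lib/kubelet/kubeadm-flags.env",
		"/var/lib/kubelet/config.yaml",
		"/var/lib/minikube/etcd",
	} {
		if _, err := os.Stat(p); err != nil {
			return false
		}
	}
	return true
}

func main() { fmt.Println(hasExistingCluster()) }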
	I0407 12:56:15.306415 1223988 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-907855" does not appear in /home/jenkins/minikube-integration/20602-902080/kubeconfig
	I0407 12:56:15.306671 1223988 kubeconfig.go:62] /home/jenkins/minikube-integration/20602-902080/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-907855" cluster setting kubeconfig missing "old-k8s-version-907855" context setting]
	I0407 12:56:15.307148 1223988 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20602-902080/kubeconfig: {Name:mk5348bdf0fa2a5d213e4c9bed1510a349ce9529 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
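The kubeconfig repair above adds the missing cluster and context entries for the profile under a file lock before writing the file back. A sketch of that repair using client-go's clientcmd (an assumption about the mechanism, not minikube's exact kubeconfig.go; locking is omitted and the server URL is taken from the log):

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
	api "k8s.io/client-go/tools/clientcmd/api"
)

// ensureEntry adds missing cluster/context entries for name, writing
// the kubeconfig back only if something changed.
func ensureEntry(path, name, server string) error {
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		return err
	}
	changed := false
	if _, ok := cfg.Clusters[name]; !ok {
		cfg.Clusters[name] = &api.Cluster{Server: server}
		changed = true
	}
	if _, ok := cfg.Contexts[name]; !ok {
		cfg.Contexts[name] = &api.Context{Cluster: name, AuthInfo: name}
		changed = true
	}
	if !changed {
		return nil // nothing to repair
	}
	return clientcmd.WriteToFile(*cfg, path)
}

func main() {
	fmt.Println(ensureEntry("kubeconfig", "old-k8s-version-907855", "https://192.168.85.2:8443"))
}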
	I0407 12:56:15.308576 1223988 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0407 12:56:15.318480 1223988 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.85.2
	I0407 12:56:15.318515 1223988 kubeadm.go:597] duration metric: took 21.640233ms to restartPrimaryControlPlane
	I0407 12:56:15.318525 1223988 kubeadm.go:394] duration metric: took 49.82634ms to StartCluster
	I0407 12:56:15.318567 1223988 settings.go:142] acquiring lock: {Name:mkfee10638cabaeb5ccab0f7580cab520f4414b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 12:56:15.318654 1223988 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20602-902080/kubeconfig
	I0407 12:56:15.319553 1223988 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20602-902080/kubeconfig: {Name:mk5348bdf0fa2a5d213e4c9bed1510a349ce9529 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 12:56:15.319813 1223988 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0407 12:56:15.320229 1223988 config.go:182] Loaded profile config "old-k8s-version-907855": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0407 12:56:15.320295 1223988 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0407 12:56:15.320411 1223988 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-907855"
	I0407 12:56:15.320449 1223988 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-907855"
	I0407 12:56:15.320483 1223988 addons.go:238] Setting addon metrics-server=true in "old-k8s-version-907855"
	W0407 12:56:15.320499 1223988 addons.go:247] addon metrics-server should already be in state true
	I0407 12:56:15.320524 1223988 host.go:66] Checking if "old-k8s-version-907855" exists ...
	I0407 12:56:15.320452 1223988 addons.go:238] Setting addon storage-provisioner=true in "old-k8s-version-907855"
	W0407 12:56:15.320568 1223988 addons.go:247] addon storage-provisioner should already be in state true
	I0407 12:56:15.320618 1223988 host.go:66] Checking if "old-k8s-version-907855" exists ...
	I0407 12:56:15.321134 1223988 cli_runner.go:164] Run: docker container inspect old-k8s-version-907855 --format={{.State.Status}}
	I0407 12:56:15.321236 1223988 cli_runner.go:164] Run: docker container inspect old-k8s-version-907855 --format={{.State.Status}}
	I0407 12:56:15.320424 1223988 addons.go:69] Setting dashboard=true in profile "old-k8s-version-907855"
	I0407 12:56:15.321573 1223988 addons.go:238] Setting addon dashboard=true in "old-k8s-version-907855"
	W0407 12:56:15.321588 1223988 addons.go:247] addon dashboard should already be in state true
	I0407 12:56:15.321611 1223988 host.go:66] Checking if "old-k8s-version-907855" exists ...
	I0407 12:56:15.322050 1223988 cli_runner.go:164] Run: docker container inspect old-k8s-version-907855 --format={{.State.Status}}
	I0407 12:56:15.320414 1223988 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-907855"
	I0407 12:56:15.324548 1223988 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-907855"
	I0407 12:56:15.325098 1223988 cli_runner.go:164] Run: docker container inspect old-k8s-version-907855 --format={{.State.Status}}
	I0407 12:56:15.326540 1223988 out.go:177] * Verifying Kubernetes components...
	I0407 12:56:15.331994 1223988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 12:56:15.386335 1223988 addons.go:238] Setting addon default-storageclass=true in "old-k8s-version-907855"
	W0407 12:56:15.386366 1223988 addons.go:247] addon default-storageclass should already be in state true
	I0407 12:56:15.386393 1223988 host.go:66] Checking if "old-k8s-version-907855" exists ...
	I0407 12:56:15.391696 1223988 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0407 12:56:15.391856 1223988 cli_runner.go:164] Run: docker container inspect old-k8s-version-907855 --format={{.State.Status}}
	I0407 12:56:15.395064 1223988 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0407 12:56:15.395091 1223988 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0407 12:56:15.395162 1223988 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-907855
	I0407 12:56:15.413197 1223988 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0407 12:56:15.420539 1223988 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0407 12:56:15.420566 1223988 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0407 12:56:15.420632 1223988 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-907855
	I0407 12:56:15.426755 1223988 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0407 12:56:15.431977 1223988 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0407 12:56:15.440944 1223988 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0407 12:56:15.440975 1223988 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0407 12:56:15.441050 1223988 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-907855
	I0407 12:56:15.441420 1223988 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34191 SSHKeyPath:/home/jenkins/minikube-integration/20602-902080/.minikube/machines/old-k8s-version-907855/id_rsa Username:docker}
	I0407 12:56:15.477164 1223988 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0407 12:56:15.477190 1223988 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0407 12:56:15.477257 1223988 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-907855
	I0407 12:56:15.482695 1223988 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34191 SSHKeyPath:/home/jenkins/minikube-integration/20602-902080/.minikube/machines/old-k8s-version-907855/id_rsa Username:docker}
	I0407 12:56:15.494921 1223988 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34191 SSHKeyPath:/home/jenkins/minikube-integration/20602-902080/.minikube/machines/old-k8s-version-907855/id_rsa Username:docker}
	I0407 12:56:15.508448 1223988 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34191 SSHKeyPath:/home/jenkins/minikube-integration/20602-902080/.minikube/machines/old-k8s-version-907855/id_rsa Username:docker}
	I0407 12:56:15.532678 1223988 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0407 12:56:15.546998 1223988 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-907855" to be "Ready" ...
	I0407 12:56:15.600400 1223988 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0407 12:56:15.600426 1223988 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0407 12:56:15.620697 1223988 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0407 12:56:15.620722 1223988 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0407 12:56:15.643917 1223988 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0407 12:56:15.643944 1223988 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
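Each addon is staged by scp-ing its manifests into /etc/kubernetes/addons, then applied with the version-matched kubectl against the node-local kubeconfig, as the Run lines that follow show. A sketch of that apply step as a local exec (minikube performs it over SSH; the kubectl path and manifest names are the ones in the log):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// applyManifests runs the version-matched kubectl against the node's
// kubeconfig, passing each staged manifest with -f, as in the log.
func applyManifests(kubectl string, files ...string) error {
	args := []string{"apply"}
	for _, f := range files {
		args = append(args, "-f", f)
	}
	cmd := exec.Command(kubectl, args...)
	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("apply failed, will retry: %v\n%s", err, out)
	}
	return nil
}

func main() {
	err := applyManifests("/var/lib/minikube/binaries/v1.20.0/kubectl",
		"/etc/kubernetes/addons/metrics-apiservice.yaml",
		"/etc/kubernetes/addons/metrics-server-service.yaml")
	fmt.Println(err)
}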
	I0407 12:56:15.671385 1223988 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0407 12:56:15.673300 1223988 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0407 12:56:15.673324 1223988 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0407 12:56:15.682345 1223988 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0407 12:56:15.691604 1223988 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0407 12:56:15.718816 1223988 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0407 12:56:15.718856 1223988 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0407 12:56:15.829202 1223988 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0407 12:56:15.829230 1223988 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0407 12:56:15.898327 1223988 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0407 12:56:15.898350 1223988 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	W0407 12:56:15.909413 1223988 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0407 12:56:15.909451 1223988 retry.go:31] will retry after 157.113023ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0407 12:56:15.909535 1223988 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0407 12:56:15.909567 1223988 retry.go:31] will retry after 233.142513ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0407 12:56:15.909638 1223988 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0407 12:56:15.909672 1223988 retry.go:31] will retry after 224.711314ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
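The repeated "apply failed, will retry" pairs here are expected while the restarted apiserver is still coming up on localhost:8443; the retry helper backs off between attempts, with delays growing from roughly 150ms toward seconds. A minimal sketch of such a jittered exponential-backoff retry, assuming doubling with a cap (the real helper is minikube's retry package, the retry.go the log cites):

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retryExpo retries fn with jittered exponential backoff, capped at max.
func retryExpo(fn func() error, initial, max time.Duration, attempts int) error {
	delay := initial
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay / 2)))
		fmt.Printf("will retry after %v: %v\n", delay+jitter, err)
		time.Sleep(delay + jitter)
		if delay *= 2; delay > max {
			delay = max
		}
	}
	return err
}

func main() {
	calls := 0
	_ = retryExpo(func() error {
		calls++
		if calls < 3 {
			return fmt.Errorf("connection to the server localhost:8443 was refused")
		}
		return nil
	}, 200*time.Millisecond, 2*time.Second, 5)
}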
	I0407 12:56:15.922165 1223988 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0407 12:56:15.922191 1223988 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0407 12:56:15.941941 1223988 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0407 12:56:15.941975 1223988 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0407 12:56:15.961855 1223988 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0407 12:56:15.961937 1223988 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0407 12:56:15.980697 1223988 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0407 12:56:15.980720 1223988 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0407 12:56:16.001262 1223988 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0407 12:56:16.001312 1223988 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0407 12:56:16.024672 1223988 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0407 12:56:16.067650 1223988 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0407 12:56:16.123164 1223988 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0407 12:56:16.123206 1223988 retry.go:31] will retry after 338.130711ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0407 12:56:16.135337 1223988 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0407 12:56:16.142895 1223988 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0407 12:56:16.209316 1223988 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0407 12:56:16.209358 1223988 retry.go:31] will retry after 359.40143ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0407 12:56:16.239934 1223988 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0407 12:56:16.240007 1223988 retry.go:31] will retry after 434.189939ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0407 12:56:16.265343 1223988 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0407 12:56:16.265373 1223988 retry.go:31] will retry after 287.483577ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0407 12:56:16.461579 1223988 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0407 12:56:16.553787 1223988 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0407 12:56:16.556066 1223988 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0407 12:56:16.556094 1223988 retry.go:31] will retry after 204.692774ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0407 12:56:16.569405 1223988 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0407 12:56:16.677202 1223988 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0407 12:56:16.762300 1223988 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0407 12:56:16.833259 1223988 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0407 12:56:16.833286 1223988 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0407 12:56:16.833292 1223988 retry.go:31] will retry after 689.621627ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0407 12:56:16.833307 1223988 retry.go:31] will retry after 526.619466ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0407 12:56:16.948698 1223988 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0407 12:56:16.948728 1223988 retry.go:31] will retry after 306.794579ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0407 12:56:16.982537 1223988 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0407 12:56:16.982574 1223988 retry.go:31] will retry after 350.566833ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0407 12:56:17.256520 1223988 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0407 12:56:17.333878 1223988 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0407 12:56:17.360105 1223988 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0407 12:56:17.377327 1223988 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0407 12:56:17.377358 1223988 retry.go:31] will retry after 947.229726ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0407 12:56:17.513197 1223988 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0407 12:56:17.513229 1223988 retry.go:31] will retry after 746.966572ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0407 12:56:17.523521 1223988 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0407 12:56:17.544400 1223988 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0407 12:56:17.544433 1223988 retry.go:31] will retry after 1.013715502s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0407 12:56:17.547980 1223988 node_ready.go:53] error getting node "old-k8s-version-907855": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-907855": dial tcp 192.168.85.2:8443: connect: connection refused
	W0407 12:56:17.609828 1223988 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0407 12:56:17.609858 1223988 retry.go:31] will retry after 1.182697033s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0407 12:56:18.261007 1223988 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0407 12:56:18.325466 1223988 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0407 12:56:18.403334 1223988 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0407 12:56:18.403371 1223988 retry.go:31] will retry after 1.500179415s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0407 12:56:18.502316 1223988 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0407 12:56:18.502364 1223988 retry.go:31] will retry after 1.269512345s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0407 12:56:18.559277 1223988 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0407 12:56:18.632817 1223988 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0407 12:56:18.632854 1223988 retry.go:31] will retry after 962.930114ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0407 12:56:18.793155 1223988 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0407 12:56:18.947100 1223988 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0407 12:56:18.947136 1223988 retry.go:31] will retry after 722.903142ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0407 12:56:19.596866 1223988 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0407 12:56:19.670631 1223988 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0407 12:56:19.772113 1223988 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0407 12:56:19.790631 1223988 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0407 12:56:19.790677 1223988 retry.go:31] will retry after 2.056685062s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0407 12:56:19.903985 1223988 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0407 12:56:21.847524 1223988 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0407 12:56:27.523943 1223988 node_ready.go:49] node "old-k8s-version-907855" has status "Ready":"True"
	I0407 12:56:27.523965 1223988 node_ready.go:38] duration metric: took 11.976935322s for node "old-k8s-version-907855" to be "Ready" ...
	I0407 12:56:27.523976 1223988 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
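From here the test polls node and pod Ready conditions until they flip to True (the earlier attempts failed with connection refused while the apiserver restarted). A client-go sketch of the node poll, with the node name, timeout, and error mode taken from the log; the kubeconfig path is a placeholder:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the node's NodeReady condition is True.
func nodeReady(cs *kubernetes.Clientset, name string) (bool, error) {
	n, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err // e.g. connect: connection refused while apiserver restarts
	}
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	deadline := time.Now().Add(6 * time.Minute) // matches the 6m0s wait in the log
	for time.Now().Before(deadline) {
		if ok, err := nodeReady(cs, "old-k8s-version-907855"); err == nil && ok {
			fmt.Println("node Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for node Ready")
}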
	I0407 12:56:27.744453 1223988 pod_ready.go:79] waiting up to 6m0s for pod "coredns-74ff55c5b-mmgvz" in "kube-system" namespace to be "Ready" ...
	I0407 12:56:28.301851 1223988 pod_ready.go:93] pod "coredns-74ff55c5b-mmgvz" in "kube-system" namespace has status "Ready":"True"
	I0407 12:56:28.301871 1223988 pod_ready.go:82] duration metric: took 557.338363ms for pod "coredns-74ff55c5b-mmgvz" in "kube-system" namespace to be "Ready" ...
	I0407 12:56:28.301883 1223988 pod_ready.go:79] waiting up to 6m0s for pod "etcd-old-k8s-version-907855" in "kube-system" namespace to be "Ready" ...
	I0407 12:56:29.938807 1223988 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (10.268132364s)
	I0407 12:56:30.327048 1223988 pod_ready.go:103] pod "etcd-old-k8s-version-907855" in "kube-system" namespace has status "Ready":"False"
	I0407 12:56:30.426982 1223988 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (10.65481223s)
	I0407 12:56:30.427068 1223988 addons.go:479] Verifying addon metrics-server=true in "old-k8s-version-907855"
	I0407 12:56:30.695694 1223988 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.848134113s)
	I0407 12:56:30.695808 1223988 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (10.791791338s)
	I0407 12:56:30.699056 1223988 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-907855 addons enable metrics-server
	
	I0407 12:56:30.702168 1223988 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner, dashboard
	I0407 12:56:30.704728 1223988 addons.go:514] duration metric: took 15.384435551s for enable addons: enabled=[default-storageclass metrics-server storage-provisioner dashboard]
	I0407 12:56:32.806370 1223988 pod_ready.go:103] pod "etcd-old-k8s-version-907855" in "kube-system" namespace has status "Ready":"False"
	I0407 12:56:34.806397 1223988 pod_ready.go:103] pod "etcd-old-k8s-version-907855" in "kube-system" namespace has status "Ready":"False"
	I0407 12:56:36.806690 1223988 pod_ready.go:103] pod "etcd-old-k8s-version-907855" in "kube-system" namespace has status "Ready":"False"
	I0407 12:56:38.807560 1223988 pod_ready.go:103] pod "etcd-old-k8s-version-907855" in "kube-system" namespace has status "Ready":"False"
	I0407 12:56:40.808473 1223988 pod_ready.go:103] pod "etcd-old-k8s-version-907855" in "kube-system" namespace has status "Ready":"False"
	I0407 12:56:43.312130 1223988 pod_ready.go:103] pod "etcd-old-k8s-version-907855" in "kube-system" namespace has status "Ready":"False"
	I0407 12:56:45.814351 1223988 pod_ready.go:103] pod "etcd-old-k8s-version-907855" in "kube-system" namespace has status "Ready":"False"
	I0407 12:56:48.333829 1223988 pod_ready.go:103] pod "etcd-old-k8s-version-907855" in "kube-system" namespace has status "Ready":"False"
	I0407 12:56:50.810284 1223988 pod_ready.go:103] pod "etcd-old-k8s-version-907855" in "kube-system" namespace has status "Ready":"False"
	I0407 12:56:52.812670 1223988 pod_ready.go:103] pod "etcd-old-k8s-version-907855" in "kube-system" namespace has status "Ready":"False"
	I0407 12:56:55.312814 1223988 pod_ready.go:103] pod "etcd-old-k8s-version-907855" in "kube-system" namespace has status "Ready":"False"
	I0407 12:56:57.356961 1223988 pod_ready.go:103] pod "etcd-old-k8s-version-907855" in "kube-system" namespace has status "Ready":"False"
	I0407 12:56:59.808735 1223988 pod_ready.go:103] pod "etcd-old-k8s-version-907855" in "kube-system" namespace has status "Ready":"False"
	I0407 12:57:02.307631 1223988 pod_ready.go:103] pod "etcd-old-k8s-version-907855" in "kube-system" namespace has status "Ready":"False"
	I0407 12:57:04.313889 1223988 pod_ready.go:103] pod "etcd-old-k8s-version-907855" in "kube-system" namespace has status "Ready":"False"
	I0407 12:57:06.807951 1223988 pod_ready.go:103] pod "etcd-old-k8s-version-907855" in "kube-system" namespace has status "Ready":"False"
	I0407 12:57:08.808610 1223988 pod_ready.go:103] pod "etcd-old-k8s-version-907855" in "kube-system" namespace has status "Ready":"False"
	I0407 12:57:11.308007 1223988 pod_ready.go:103] pod "etcd-old-k8s-version-907855" in "kube-system" namespace has status "Ready":"False"
	I0407 12:57:13.806859 1223988 pod_ready.go:103] pod "etcd-old-k8s-version-907855" in "kube-system" namespace has status "Ready":"False"
	I0407 12:57:15.807317 1223988 pod_ready.go:103] pod "etcd-old-k8s-version-907855" in "kube-system" namespace has status "Ready":"False"
	I0407 12:57:18.307703 1223988 pod_ready.go:103] pod "etcd-old-k8s-version-907855" in "kube-system" namespace has status "Ready":"False"
	I0407 12:57:20.808963 1223988 pod_ready.go:103] pod "etcd-old-k8s-version-907855" in "kube-system" namespace has status "Ready":"False"
	I0407 12:57:23.307566 1223988 pod_ready.go:103] pod "etcd-old-k8s-version-907855" in "kube-system" namespace has status "Ready":"False"
	I0407 12:57:25.807001 1223988 pod_ready.go:103] pod "etcd-old-k8s-version-907855" in "kube-system" namespace has status "Ready":"False"
	I0407 12:57:27.807286 1223988 pod_ready.go:103] pod "etcd-old-k8s-version-907855" in "kube-system" namespace has status "Ready":"False"
	I0407 12:57:29.807782 1223988 pod_ready.go:103] pod "etcd-old-k8s-version-907855" in "kube-system" namespace has status "Ready":"False"
	I0407 12:57:32.307344 1223988 pod_ready.go:103] pod "etcd-old-k8s-version-907855" in "kube-system" namespace has status "Ready":"False"
	I0407 12:57:33.806704 1223988 pod_ready.go:93] pod "etcd-old-k8s-version-907855" in "kube-system" namespace has status "Ready":"True"
	I0407 12:57:33.806728 1223988 pod_ready.go:82] duration metric: took 1m5.504838211s for pod "etcd-old-k8s-version-907855" in "kube-system" namespace to be "Ready" ...
	I0407 12:57:33.806744 1223988 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-907855" in "kube-system" namespace to be "Ready" ...
	I0407 12:57:33.810342 1223988 pod_ready.go:93] pod "kube-apiserver-old-k8s-version-907855" in "kube-system" namespace has status "Ready":"True"
	I0407 12:57:33.810366 1223988 pod_ready.go:82] duration metric: took 3.613276ms for pod "kube-apiserver-old-k8s-version-907855" in "kube-system" namespace to be "Ready" ...
	I0407 12:57:33.810377 1223988 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-907855" in "kube-system" namespace to be "Ready" ...
	I0407 12:57:33.814048 1223988 pod_ready.go:93] pod "kube-controller-manager-old-k8s-version-907855" in "kube-system" namespace has status "Ready":"True"
	I0407 12:57:33.814072 1223988 pod_ready.go:82] duration metric: took 3.686779ms for pod "kube-controller-manager-old-k8s-version-907855" in "kube-system" namespace to be "Ready" ...
	I0407 12:57:33.814083 1223988 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-qskm8" in "kube-system" namespace to be "Ready" ...
	I0407 12:57:33.817731 1223988 pod_ready.go:93] pod "kube-proxy-qskm8" in "kube-system" namespace has status "Ready":"True"
	I0407 12:57:33.817763 1223988 pod_ready.go:82] duration metric: took 3.672772ms for pod "kube-proxy-qskm8" in "kube-system" namespace to be "Ready" ...
	I0407 12:57:33.817774 1223988 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-907855" in "kube-system" namespace to be "Ready" ...
	I0407 12:57:35.823945 1223988 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-907855" in "kube-system" namespace has status "Ready":"False"
	I0407 12:57:38.322945 1223988 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-907855" in "kube-system" namespace has status "Ready":"False"
	I0407 12:57:40.324123 1223988 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-907855" in "kube-system" namespace has status "Ready":"False"
	I0407 12:57:42.823794 1223988 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-907855" in "kube-system" namespace has status "Ready":"False"
	I0407 12:57:44.823936 1223988 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-907855" in "kube-system" namespace has status "Ready":"False"
	I0407 12:57:47.329943 1223988 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-907855" in "kube-system" namespace has status "Ready":"False"
	I0407 12:57:49.823414 1223988 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-907855" in "kube-system" namespace has status "Ready":"False"
	I0407 12:57:52.323410 1223988 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-907855" in "kube-system" namespace has status "Ready":"False"
	I0407 12:57:52.822955 1223988 pod_ready.go:93] pod "kube-scheduler-old-k8s-version-907855" in "kube-system" namespace has status "Ready":"True"
	I0407 12:57:52.822984 1223988 pod_ready.go:82] duration metric: took 19.005202353s for pod "kube-scheduler-old-k8s-version-907855" in "kube-system" namespace to be "Ready" ...
	I0407 12:57:52.822997 1223988 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-9975d5f86-hpzkf" in "kube-system" namespace to be "Ready" ...
	I0407 12:57:54.827742 1223988 pod_ready.go:103] pod "metrics-server-9975d5f86-hpzkf" in "kube-system" namespace has status "Ready":"False"
	I0407 12:57:56.828698 1223988 pod_ready.go:103] pod "metrics-server-9975d5f86-hpzkf" in "kube-system" namespace has status "Ready":"False"
	I0407 12:57:59.329448 1223988 pod_ready.go:103] pod "metrics-server-9975d5f86-hpzkf" in "kube-system" namespace has status "Ready":"False"
	I0407 12:58:01.329589 1223988 pod_ready.go:103] pod "metrics-server-9975d5f86-hpzkf" in "kube-system" namespace has status "Ready":"False"
	I0407 12:58:03.828560 1223988 pod_ready.go:103] pod "metrics-server-9975d5f86-hpzkf" in "kube-system" namespace has status "Ready":"False"
	I0407 12:58:05.829314 1223988 pod_ready.go:103] pod "metrics-server-9975d5f86-hpzkf" in "kube-system" namespace has status "Ready":"False"
	I0407 12:58:08.331597 1223988 pod_ready.go:103] pod "metrics-server-9975d5f86-hpzkf" in "kube-system" namespace has status "Ready":"False"
	I0407 12:58:10.828311 1223988 pod_ready.go:103] pod "metrics-server-9975d5f86-hpzkf" in "kube-system" namespace has status "Ready":"False"
	I0407 12:58:12.828922 1223988 pod_ready.go:103] pod "metrics-server-9975d5f86-hpzkf" in "kube-system" namespace has status "Ready":"False"
	I0407 12:58:15.327906 1223988 pod_ready.go:103] pod "metrics-server-9975d5f86-hpzkf" in "kube-system" namespace has status "Ready":"False"
	I0407 12:58:17.328856 1223988 pod_ready.go:103] pod "metrics-server-9975d5f86-hpzkf" in "kube-system" namespace has status "Ready":"False"
	I0407 12:58:19.827846 1223988 pod_ready.go:103] pod "metrics-server-9975d5f86-hpzkf" in "kube-system" namespace has status "Ready":"False"
	I0407 12:58:21.828880 1223988 pod_ready.go:103] pod "metrics-server-9975d5f86-hpzkf" in "kube-system" namespace has status "Ready":"False"
	I0407 12:58:24.328568 1223988 pod_ready.go:103] pod "metrics-server-9975d5f86-hpzkf" in "kube-system" namespace has status "Ready":"False"
	I0407 12:58:26.329511 1223988 pod_ready.go:103] pod "metrics-server-9975d5f86-hpzkf" in "kube-system" namespace has status "Ready":"False"
	I0407 12:58:28.828013 1223988 pod_ready.go:103] pod "metrics-server-9975d5f86-hpzkf" in "kube-system" namespace has status "Ready":"False"
	I0407 12:58:30.829043 1223988 pod_ready.go:103] pod "metrics-server-9975d5f86-hpzkf" in "kube-system" namespace has status "Ready":"False"
	I0407 12:58:33.328950 1223988 pod_ready.go:103] pod "metrics-server-9975d5f86-hpzkf" in "kube-system" namespace has status "Ready":"False"
	I0407 12:58:35.332366 1223988 pod_ready.go:103] pod "metrics-server-9975d5f86-hpzkf" in "kube-system" namespace has status "Ready":"False"
	I0407 12:58:37.828854 1223988 pod_ready.go:103] pod "metrics-server-9975d5f86-hpzkf" in "kube-system" namespace has status "Ready":"False"
	I0407 12:58:40.328493 1223988 pod_ready.go:103] pod "metrics-server-9975d5f86-hpzkf" in "kube-system" namespace has status "Ready":"False"
	I0407 12:58:42.329072 1223988 pod_ready.go:103] pod "metrics-server-9975d5f86-hpzkf" in "kube-system" namespace has status "Ready":"False"
	I0407 12:58:44.828432 1223988 pod_ready.go:103] pod "metrics-server-9975d5f86-hpzkf" in "kube-system" namespace has status "Ready":"False"
	I0407 12:58:46.828625 1223988 pod_ready.go:103] pod "metrics-server-9975d5f86-hpzkf" in "kube-system" namespace has status "Ready":"False"
	I0407 12:58:48.829176 1223988 pod_ready.go:103] pod "metrics-server-9975d5f86-hpzkf" in "kube-system" namespace has status "Ready":"False"
	I0407 12:58:50.832292 1223988 pod_ready.go:103] pod "metrics-server-9975d5f86-hpzkf" in "kube-system" namespace has status "Ready":"False"
	I0407 12:58:53.328888 1223988 pod_ready.go:103] pod "metrics-server-9975d5f86-hpzkf" in "kube-system" namespace has status "Ready":"False"
	I0407 12:58:55.828127 1223988 pod_ready.go:103] pod "metrics-server-9975d5f86-hpzkf" in "kube-system" namespace has status "Ready":"False"
	I0407 12:58:57.828410 1223988 pod_ready.go:103] pod "metrics-server-9975d5f86-hpzkf" in "kube-system" namespace has status "Ready":"False"
	I0407 12:58:59.829113 1223988 pod_ready.go:103] pod "metrics-server-9975d5f86-hpzkf" in "kube-system" namespace has status "Ready":"False"
	I0407 12:59:01.829170 1223988 pod_ready.go:103] pod "metrics-server-9975d5f86-hpzkf" in "kube-system" namespace has status "Ready":"False"
	I0407 12:59:04.328534 1223988 pod_ready.go:103] pod "metrics-server-9975d5f86-hpzkf" in "kube-system" namespace has status "Ready":"False"
	I0407 12:59:06.328902 1223988 pod_ready.go:103] pod "metrics-server-9975d5f86-hpzkf" in "kube-system" namespace has status "Ready":"False"
	I0407 12:59:08.329525 1223988 pod_ready.go:103] pod "metrics-server-9975d5f86-hpzkf" in "kube-system" namespace has status "Ready":"False"
	I0407 12:59:10.828254 1223988 pod_ready.go:103] pod "metrics-server-9975d5f86-hpzkf" in "kube-system" namespace has status "Ready":"False"
	I0407 12:59:12.828514 1223988 pod_ready.go:103] pod "metrics-server-9975d5f86-hpzkf" in "kube-system" namespace has status "Ready":"False"
	I0407 12:59:15.330589 1223988 pod_ready.go:103] pod "metrics-server-9975d5f86-hpzkf" in "kube-system" namespace has status "Ready":"False"
	I0407 12:59:17.335339 1223988 pod_ready.go:103] pod "metrics-server-9975d5f86-hpzkf" in "kube-system" namespace has status "Ready":"False"
	I0407 12:59:19.828660 1223988 pod_ready.go:103] pod "metrics-server-9975d5f86-hpzkf" in "kube-system" namespace has status "Ready":"False"
	I0407 12:59:21.829456 1223988 pod_ready.go:103] pod "metrics-server-9975d5f86-hpzkf" in "kube-system" namespace has status "Ready":"False"
	I0407 12:59:24.329219 1223988 pod_ready.go:103] pod "metrics-server-9975d5f86-hpzkf" in "kube-system" namespace has status "Ready":"False"
	I0407 12:59:26.828654 1223988 pod_ready.go:103] pod "metrics-server-9975d5f86-hpzkf" in "kube-system" namespace has status "Ready":"False"
	I0407 12:59:29.328009 1223988 pod_ready.go:103] pod "metrics-server-9975d5f86-hpzkf" in "kube-system" namespace has status "Ready":"False"
	I0407 12:59:31.328364 1223988 pod_ready.go:103] pod "metrics-server-9975d5f86-hpzkf" in "kube-system" namespace has status "Ready":"False"
	I0407 12:59:33.328874 1223988 pod_ready.go:103] pod "metrics-server-9975d5f86-hpzkf" in "kube-system" namespace has status "Ready":"False"
	I0407 12:59:35.829024 1223988 pod_ready.go:103] pod "metrics-server-9975d5f86-hpzkf" in "kube-system" namespace has status "Ready":"False"
	I0407 12:59:38.329149 1223988 pod_ready.go:103] pod "metrics-server-9975d5f86-hpzkf" in "kube-system" namespace has status "Ready":"False"
	I0407 12:59:40.827771 1223988 pod_ready.go:103] pod "metrics-server-9975d5f86-hpzkf" in "kube-system" namespace has status "Ready":"False"
	I0407 12:59:42.828476 1223988 pod_ready.go:103] pod "metrics-server-9975d5f86-hpzkf" in "kube-system" namespace has status "Ready":"False"
	I0407 12:59:44.828712 1223988 pod_ready.go:103] pod "metrics-server-9975d5f86-hpzkf" in "kube-system" namespace has status "Ready":"False"
	I0407 12:59:46.829112 1223988 pod_ready.go:103] pod "metrics-server-9975d5f86-hpzkf" in "kube-system" namespace has status "Ready":"False"
	I0407 12:59:48.853435 1223988 pod_ready.go:103] pod "metrics-server-9975d5f86-hpzkf" in "kube-system" namespace has status "Ready":"False"
	I0407 12:59:51.328971 1223988 pod_ready.go:103] pod "metrics-server-9975d5f86-hpzkf" in "kube-system" namespace has status "Ready":"False"
	I0407 12:59:53.828703 1223988 pod_ready.go:103] pod "metrics-server-9975d5f86-hpzkf" in "kube-system" namespace has status "Ready":"False"
	I0407 12:59:55.828763 1223988 pod_ready.go:103] pod "metrics-server-9975d5f86-hpzkf" in "kube-system" namespace has status "Ready":"False"
	I0407 12:59:58.328328 1223988 pod_ready.go:103] pod "metrics-server-9975d5f86-hpzkf" in "kube-system" namespace has status "Ready":"False"
	I0407 13:00:00.361126 1223988 pod_ready.go:103] pod "metrics-server-9975d5f86-hpzkf" in "kube-system" namespace has status "Ready":"False"
	I0407 13:00:02.827852 1223988 pod_ready.go:103] pod "metrics-server-9975d5f86-hpzkf" in "kube-system" namespace has status "Ready":"False"
	I0407 13:00:04.828346 1223988 pod_ready.go:103] pod "metrics-server-9975d5f86-hpzkf" in "kube-system" namespace has status "Ready":"False"
	I0407 13:00:07.328474 1223988 pod_ready.go:103] pod "metrics-server-9975d5f86-hpzkf" in "kube-system" namespace has status "Ready":"False"
	I0407 13:00:09.829115 1223988 pod_ready.go:103] pod "metrics-server-9975d5f86-hpzkf" in "kube-system" namespace has status "Ready":"False"
	I0407 13:00:11.829161 1223988 pod_ready.go:103] pod "metrics-server-9975d5f86-hpzkf" in "kube-system" namespace has status "Ready":"False"
	I0407 13:00:14.328621 1223988 pod_ready.go:103] pod "metrics-server-9975d5f86-hpzkf" in "kube-system" namespace has status "Ready":"False"
	I0407 13:00:16.329849 1223988 pod_ready.go:103] pod "metrics-server-9975d5f86-hpzkf" in "kube-system" namespace has status "Ready":"False"
	I0407 13:00:18.829364 1223988 pod_ready.go:103] pod "metrics-server-9975d5f86-hpzkf" in "kube-system" namespace has status "Ready":"False"
	I0407 13:00:21.328291 1223988 pod_ready.go:103] pod "metrics-server-9975d5f86-hpzkf" in "kube-system" namespace has status "Ready":"False"
	I0407 13:00:23.328676 1223988 pod_ready.go:103] pod "metrics-server-9975d5f86-hpzkf" in "kube-system" namespace has status "Ready":"False"
	I0407 13:00:25.828573 1223988 pod_ready.go:103] pod "metrics-server-9975d5f86-hpzkf" in "kube-system" namespace has status "Ready":"False"
	I0407 13:00:27.828678 1223988 pod_ready.go:103] pod "metrics-server-9975d5f86-hpzkf" in "kube-system" namespace has status "Ready":"False"
	I0407 13:00:30.328387 1223988 pod_ready.go:103] pod "metrics-server-9975d5f86-hpzkf" in "kube-system" namespace has status "Ready":"False"
	I0407 13:00:32.329106 1223988 pod_ready.go:103] pod "metrics-server-9975d5f86-hpzkf" in "kube-system" namespace has status "Ready":"False"
	I0407 13:00:34.829387 1223988 pod_ready.go:103] pod "metrics-server-9975d5f86-hpzkf" in "kube-system" namespace has status "Ready":"False"
	I0407 13:00:37.328489 1223988 pod_ready.go:103] pod "metrics-server-9975d5f86-hpzkf" in "kube-system" namespace has status "Ready":"False"
	I0407 13:00:39.329667 1223988 pod_ready.go:103] pod "metrics-server-9975d5f86-hpzkf" in "kube-system" namespace has status "Ready":"False"
	I0407 13:00:41.828642 1223988 pod_ready.go:103] pod "metrics-server-9975d5f86-hpzkf" in "kube-system" namespace has status "Ready":"False"
	I0407 13:00:43.832393 1223988 pod_ready.go:103] pod "metrics-server-9975d5f86-hpzkf" in "kube-system" namespace has status "Ready":"False"
	I0407 13:00:46.329313 1223988 pod_ready.go:103] pod "metrics-server-9975d5f86-hpzkf" in "kube-system" namespace has status "Ready":"False"
	I0407 13:00:48.829073 1223988 pod_ready.go:103] pod "metrics-server-9975d5f86-hpzkf" in "kube-system" namespace has status "Ready":"False"
	I0407 13:00:51.328442 1223988 pod_ready.go:103] pod "metrics-server-9975d5f86-hpzkf" in "kube-system" namespace has status "Ready":"False"
	I0407 13:00:53.829523 1223988 pod_ready.go:103] pod "metrics-server-9975d5f86-hpzkf" in "kube-system" namespace has status "Ready":"False"
	I0407 13:00:55.831524 1223988 pod_ready.go:103] pod "metrics-server-9975d5f86-hpzkf" in "kube-system" namespace has status "Ready":"False"
	I0407 13:00:58.337563 1223988 pod_ready.go:103] pod "metrics-server-9975d5f86-hpzkf" in "kube-system" namespace has status "Ready":"False"
	I0407 13:01:00.828033 1223988 pod_ready.go:103] pod "metrics-server-9975d5f86-hpzkf" in "kube-system" namespace has status "Ready":"False"
	I0407 13:01:02.828226 1223988 pod_ready.go:103] pod "metrics-server-9975d5f86-hpzkf" in "kube-system" namespace has status "Ready":"False"
	I0407 13:01:04.829446 1223988 pod_ready.go:103] pod "metrics-server-9975d5f86-hpzkf" in "kube-system" namespace has status "Ready":"False"
	I0407 13:01:07.327769 1223988 pod_ready.go:103] pod "metrics-server-9975d5f86-hpzkf" in "kube-system" namespace has status "Ready":"False"
	I0407 13:01:09.330386 1223988 pod_ready.go:103] pod "metrics-server-9975d5f86-hpzkf" in "kube-system" namespace has status "Ready":"False"
	I0407 13:01:11.828673 1223988 pod_ready.go:103] pod "metrics-server-9975d5f86-hpzkf" in "kube-system" namespace has status "Ready":"False"
	I0407 13:01:13.830545 1223988 pod_ready.go:103] pod "metrics-server-9975d5f86-hpzkf" in "kube-system" namespace has status "Ready":"False"
	I0407 13:01:16.327840 1223988 pod_ready.go:103] pod "metrics-server-9975d5f86-hpzkf" in "kube-system" namespace has status "Ready":"False"
	I0407 13:01:18.328505 1223988 pod_ready.go:103] pod "metrics-server-9975d5f86-hpzkf" in "kube-system" namespace has status "Ready":"False"
	I0407 13:01:20.830350 1223988 pod_ready.go:103] pod "metrics-server-9975d5f86-hpzkf" in "kube-system" namespace has status "Ready":"False"
	I0407 13:01:23.329744 1223988 pod_ready.go:103] pod "metrics-server-9975d5f86-hpzkf" in "kube-system" namespace has status "Ready":"False"
	I0407 13:01:25.333778 1223988 pod_ready.go:103] pod "metrics-server-9975d5f86-hpzkf" in "kube-system" namespace has status "Ready":"False"
	I0407 13:01:27.828626 1223988 pod_ready.go:103] pod "metrics-server-9975d5f86-hpzkf" in "kube-system" namespace has status "Ready":"False"
	I0407 13:01:29.828909 1223988 pod_ready.go:103] pod "metrics-server-9975d5f86-hpzkf" in "kube-system" namespace has status "Ready":"False"
	I0407 13:01:31.829714 1223988 pod_ready.go:103] pod "metrics-server-9975d5f86-hpzkf" in "kube-system" namespace has status "Ready":"False"
	I0407 13:01:34.327884 1223988 pod_ready.go:103] pod "metrics-server-9975d5f86-hpzkf" in "kube-system" namespace has status "Ready":"False"
	I0407 13:01:36.329611 1223988 pod_ready.go:103] pod "metrics-server-9975d5f86-hpzkf" in "kube-system" namespace has status "Ready":"False"
	I0407 13:01:38.828978 1223988 pod_ready.go:103] pod "metrics-server-9975d5f86-hpzkf" in "kube-system" namespace has status "Ready":"False"
	I0407 13:01:41.329691 1223988 pod_ready.go:103] pod "metrics-server-9975d5f86-hpzkf" in "kube-system" namespace has status "Ready":"False"
	I0407 13:01:43.330345 1223988 pod_ready.go:103] pod "metrics-server-9975d5f86-hpzkf" in "kube-system" namespace has status "Ready":"False"
	I0407 13:01:45.333783 1223988 pod_ready.go:103] pod "metrics-server-9975d5f86-hpzkf" in "kube-system" namespace has status "Ready":"False"
	I0407 13:01:47.828644 1223988 pod_ready.go:103] pod "metrics-server-9975d5f86-hpzkf" in "kube-system" namespace has status "Ready":"False"
	I0407 13:01:49.829188 1223988 pod_ready.go:103] pod "metrics-server-9975d5f86-hpzkf" in "kube-system" namespace has status "Ready":"False"
	I0407 13:01:52.329060 1223988 pod_ready.go:103] pod "metrics-server-9975d5f86-hpzkf" in "kube-system" namespace has status "Ready":"False"
	I0407 13:01:52.828873 1223988 pod_ready.go:82] duration metric: took 4m0.00586178s for pod "metrics-server-9975d5f86-hpzkf" in "kube-system" namespace to be "Ready" ...
	E0407 13:01:52.828894 1223988 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0407 13:01:52.828902 1223988 pod_ready.go:39] duration metric: took 5m25.304914992s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
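	(Editor's illustration: the 5m25s wait above is pod_ready.go polling each pod's Ready condition until the 6m0s per-pod budget runs out on metrics-server. As a minimal sketch only — assuming client-go, and not minikube's actual pod_ready.go — the same Ready-condition poll for the pod from this run could look like the following; the 2s interval and helper names are hypothetical.)

	// Minimal sketch, assuming client-go; NOT minikube's actual pod_ready.go.
	// Polls a pod's Ready condition every 2s for up to 6m, as the log above does.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether the pod's PodReady condition is True.
	func podReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		err = wait.PollImmediate(2*time.Second, 6*time.Minute, func() (bool, error) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
				"metrics-server-9975d5f86-hpzkf", metav1.GetOptions{})
			if err != nil {
				return false, nil // tolerate transient API errors and keep polling
			}
			return podReady(pod), nil
		})
		// err is non-nil if the pod never becomes Ready within the timeout,
		// analogous to the WaitExtra deadline exceeded in the log above.
		fmt.Println("wait result:", err)
	}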
	I0407 13:01:52.828920 1223988 api_server.go:52] waiting for apiserver process to appear ...
	I0407 13:01:52.829030 1223988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0407 13:01:52.895418 1223988 logs.go:282] 2 containers: [8e9ca3cf686f 002b3321c8c9]
	I0407 13:01:52.895505 1223988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0407 13:01:52.918038 1223988 logs.go:282] 2 containers: [499bde040d37 76fcb451fd44]
	I0407 13:01:52.918123 1223988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0407 13:01:52.945768 1223988 logs.go:282] 2 containers: [e6a43a71b1f6 73c94e36d8a2]
	I0407 13:01:52.945857 1223988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0407 13:01:52.984050 1223988 logs.go:282] 2 containers: [0f335eb94cad 9715ae775fae]
	I0407 13:01:52.984217 1223988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0407 13:01:53.017976 1223988 logs.go:282] 2 containers: [02c99fe2d89e 308161cfd111]
	I0407 13:01:53.018147 1223988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0407 13:01:53.056322 1223988 logs.go:282] 2 containers: [5abec15abc05 3652c993a04e]
	I0407 13:01:53.056486 1223988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0407 13:01:53.088657 1223988 logs.go:282] 0 containers: []
	W0407 13:01:53.088701 1223988 logs.go:284] No container was found matching "kindnet"
	I0407 13:01:53.088763 1223988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0407 13:01:53.122569 1223988 logs.go:282] 1 containers: [f4034a5c5e25]
	I0407 13:01:53.122661 1223988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0407 13:01:53.172833 1223988 logs.go:282] 2 containers: [71f6bbb99341 49a236bde2cb]
	I0407 13:01:53.172879 1223988 logs.go:123] Gathering logs for describe nodes ...
	I0407 13:01:53.172894 1223988 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0407 13:01:53.492475 1223988 logs.go:123] Gathering logs for kube-controller-manager [3652c993a04e] ...
	I0407 13:01:53.492509 1223988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3652c993a04e"
	I0407 13:01:53.583751 1223988 logs.go:123] Gathering logs for container status ...
	I0407 13:01:53.583835 1223988 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 13:01:53.678786 1223988 logs.go:123] Gathering logs for kube-apiserver [8e9ca3cf686f] ...
	I0407 13:01:53.678867 1223988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e9ca3cf686f"
	I0407 13:01:53.765099 1223988 logs.go:123] Gathering logs for etcd [499bde040d37] ...
	I0407 13:01:53.765183 1223988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 499bde040d37"
	I0407 13:01:53.812738 1223988 logs.go:123] Gathering logs for etcd [76fcb451fd44] ...
	I0407 13:01:53.812854 1223988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76fcb451fd44"
	I0407 13:01:53.863780 1223988 logs.go:123] Gathering logs for coredns [73c94e36d8a2] ...
	I0407 13:01:53.863867 1223988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73c94e36d8a2"
	I0407 13:01:53.946570 1223988 logs.go:123] Gathering logs for kubernetes-dashboard [f4034a5c5e25] ...
	I0407 13:01:53.946647 1223988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4034a5c5e25"
	I0407 13:01:53.987779 1223988 logs.go:123] Gathering logs for storage-provisioner [49a236bde2cb] ...
	I0407 13:01:53.987857 1223988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49a236bde2cb"
	I0407 13:01:54.023298 1223988 logs.go:123] Gathering logs for kube-apiserver [002b3321c8c9] ...
	I0407 13:01:54.023383 1223988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 002b3321c8c9"
	I0407 13:01:54.165305 1223988 logs.go:123] Gathering logs for kube-scheduler [0f335eb94cad] ...
	I0407 13:01:54.165344 1223988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f335eb94cad"
	I0407 13:01:54.204612 1223988 logs.go:123] Gathering logs for kube-scheduler [9715ae775fae] ...
	I0407 13:01:54.204644 1223988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9715ae775fae"
	I0407 13:01:54.241251 1223988 logs.go:123] Gathering logs for kube-proxy [02c99fe2d89e] ...
	I0407 13:01:54.241293 1223988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02c99fe2d89e"
	I0407 13:01:54.290685 1223988 logs.go:123] Gathering logs for kube-proxy [308161cfd111] ...
	I0407 13:01:54.290724 1223988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 308161cfd111"
	I0407 13:01:54.338015 1223988 logs.go:123] Gathering logs for kube-controller-manager [5abec15abc05] ...
	I0407 13:01:54.338093 1223988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5abec15abc05"
	I0407 13:01:54.436720 1223988 logs.go:123] Gathering logs for Docker ...
	I0407 13:01:54.436756 1223988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0407 13:01:54.528048 1223988 logs.go:123] Gathering logs for dmesg ...
	I0407 13:01:54.528090 1223988 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 13:01:54.560158 1223988 logs.go:123] Gathering logs for coredns [e6a43a71b1f6] ...
	I0407 13:01:54.560183 1223988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6a43a71b1f6"
	I0407 13:01:54.613268 1223988 logs.go:123] Gathering logs for storage-provisioner [71f6bbb99341] ...
	I0407 13:01:54.613344 1223988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71f6bbb99341"
	I0407 13:01:54.650661 1223988 logs.go:123] Gathering logs for kubelet ...
	I0407 13:01:54.650729 1223988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0407 13:01:54.715363 1223988 logs.go:138] Found kubelet problem: Apr 07 12:56:27 old-k8s-version-907855 kubelet[1490]: E0407 12:56:27.364375    1490 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-907855" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-907855' and this object
	W0407 13:01:54.715674 1223988 logs.go:138] Found kubelet problem: Apr 07 12:56:27 old-k8s-version-907855 kubelet[1490]: E0407 12:56:27.364549    1490 reflector.go:138] object-"kube-system"/"kube-proxy-token-vfglr": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-vfglr" is forbidden: User "system:node:old-k8s-version-907855" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-907855' and this object
	W0407 13:01:54.715914 1223988 logs.go:138] Found kubelet problem: Apr 07 12:56:27 old-k8s-version-907855 kubelet[1490]: E0407 12:56:27.364618    1490 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-907855" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-907855' and this object
	W0407 13:01:54.716161 1223988 logs.go:138] Found kubelet problem: Apr 07 12:56:27 old-k8s-version-907855 kubelet[1490]: E0407 12:56:27.364679    1490 reflector.go:138] object-"kube-system"/"metrics-server-token-gjsq5": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-gjsq5" is forbidden: User "system:node:old-k8s-version-907855" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-907855' and this object
	W0407 13:01:54.716413 1223988 logs.go:138] Found kubelet problem: Apr 07 12:56:27 old-k8s-version-907855 kubelet[1490]: E0407 12:56:27.364737    1490 reflector.go:138] object-"kube-system"/"storage-provisioner-token-z49vs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-z49vs" is forbidden: User "system:node:old-k8s-version-907855" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-907855' and this object
	W0407 13:01:54.716651 1223988 logs.go:138] Found kubelet problem: Apr 07 12:56:27 old-k8s-version-907855 kubelet[1490]: E0407 12:56:27.371448    1490 reflector.go:138] object-"kube-system"/"coredns-token-nwrv9": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-nwrv9" is forbidden: User "system:node:old-k8s-version-907855" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-907855' and this object
	W0407 13:01:54.716905 1223988 logs.go:138] Found kubelet problem: Apr 07 12:56:27 old-k8s-version-907855 kubelet[1490]: E0407 12:56:27.371503    1490 reflector.go:138] object-"default"/"default-token-ld75m": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-ld75m" is forbidden: User "system:node:old-k8s-version-907855" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-907855' and this object
	W0407 13:01:54.723581 1223988 logs.go:138] Found kubelet problem: Apr 07 12:56:30 old-k8s-version-907855 kubelet[1490]: E0407 12:56:30.148275    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0407 13:01:54.724637 1223988 logs.go:138] Found kubelet problem: Apr 07 12:56:30 old-k8s-version-907855 kubelet[1490]: E0407 12:56:30.608856    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:01:54.725415 1223988 logs.go:138] Found kubelet problem: Apr 07 12:56:31 old-k8s-version-907855 kubelet[1490]: E0407 12:56:31.653412    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:01:54.728104 1223988 logs.go:138] Found kubelet problem: Apr 07 12:56:44 old-k8s-version-907855 kubelet[1490]: E0407 12:56:44.480129    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0407 13:01:54.728476 1223988 logs.go:138] Found kubelet problem: Apr 07 12:56:48 old-k8s-version-907855 kubelet[1490]: E0407 12:56:48.210975    1490 reflector.go:138] object-"kubernetes-dashboard"/"kubernetes-dashboard-token-6t88s": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kubernetes-dashboard-token-6t88s" is forbidden: User "system:node:old-k8s-version-907855" cannot list resource "secrets" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'old-k8s-version-907855' and this object
	W0407 13:01:54.733541 1223988 logs.go:138] Found kubelet problem: Apr 07 12:56:56 old-k8s-version-907855 kubelet[1490]: E0407 12:56:56.515766    1490 pod_workers.go:191] Error syncing pod ed73d674-5412-4f15-ab2f-792e1c2d94ea ("dashboard-metrics-scraper-8d5bb5db8-rl4hx_kubernetes-dashboard(ed73d674-5412-4f15-ab2f-792e1c2d94ea)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0407 13:01:54.734014 1223988 logs.go:138] Found kubelet problem: Apr 07 12:56:57 old-k8s-version-907855 kubelet[1490]: E0407 12:56:57.209281    1490 pod_workers.go:191] Error syncing pod ed73d674-5412-4f15-ab2f-792e1c2d94ea ("dashboard-metrics-scraper-8d5bb5db8-rl4hx_kubernetes-dashboard(ed73d674-5412-4f15-ab2f-792e1c2d94ea)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:01:54.734229 1223988 logs.go:138] Found kubelet problem: Apr 07 12:56:57 old-k8s-version-907855 kubelet[1490]: E0407 12:56:57.418876    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:01:54.735074 1223988 logs.go:138] Found kubelet problem: Apr 07 12:57:01 old-k8s-version-907855 kubelet[1490]: E0407 12:57:01.269861    1490 pod_workers.go:191] Error syncing pod dc3ff993-a34e-429d-8975-38688893221d ("storage-provisioner_kube-system(dc3ff993-a34e-429d-8975-38688893221d)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(dc3ff993-a34e-429d-8975-38688893221d)"
	W0407 13:01:54.737281 1223988 logs.go:138] Found kubelet problem: Apr 07 12:57:08 old-k8s-version-907855 kubelet[1490]: E0407 12:57:08.473737    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0407 13:01:54.739927 1223988 logs.go:138] Found kubelet problem: Apr 07 12:57:09 old-k8s-version-907855 kubelet[1490]: E0407 12:57:09.895279    1490 pod_workers.go:191] Error syncing pod ed73d674-5412-4f15-ab2f-792e1c2d94ea ("dashboard-metrics-scraper-8d5bb5db8-rl4hx_kubernetes-dashboard(ed73d674-5412-4f15-ab2f-792e1c2d94ea)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0407 13:01:54.740297 1223988 logs.go:138] Found kubelet problem: Apr 07 12:57:19 old-k8s-version-907855 kubelet[1490]: E0407 12:57:19.417047    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:01:54.740524 1223988 logs.go:138] Found kubelet problem: Apr 07 12:57:20 old-k8s-version-907855 kubelet[1490]: E0407 12:57:20.425017    1490 pod_workers.go:191] Error syncing pod ed73d674-5412-4f15-ab2f-792e1c2d94ea ("dashboard-metrics-scraper-8d5bb5db8-rl4hx_kubernetes-dashboard(ed73d674-5412-4f15-ab2f-792e1c2d94ea)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:01:54.742872 1223988 logs.go:138] Found kubelet problem: Apr 07 12:57:32 old-k8s-version-907855 kubelet[1490]: E0407 12:57:32.874830    1490 pod_workers.go:191] Error syncing pod ed73d674-5412-4f15-ab2f-792e1c2d94ea ("dashboard-metrics-scraper-8d5bb5db8-rl4hx_kubernetes-dashboard(ed73d674-5412-4f15-ab2f-792e1c2d94ea)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0407 13:01:54.743094 1223988 logs.go:138] Found kubelet problem: Apr 07 12:57:34 old-k8s-version-907855 kubelet[1490]: E0407 12:57:34.417561    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:01:54.743394 1223988 logs.go:138] Found kubelet problem: Apr 07 12:57:45 old-k8s-version-907855 kubelet[1490]: E0407 12:57:45.417155    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:01:54.743626 1223988 logs.go:138] Found kubelet problem: Apr 07 12:57:48 old-k8s-version-907855 kubelet[1490]: E0407 12:57:48.430562    1490 pod_workers.go:191] Error syncing pod ed73d674-5412-4f15-ab2f-792e1c2d94ea ("dashboard-metrics-scraper-8d5bb5db8-rl4hx_kubernetes-dashboard(ed73d674-5412-4f15-ab2f-792e1c2d94ea)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:01:54.745789 1223988 logs.go:138] Found kubelet problem: Apr 07 12:57:57 old-k8s-version-907855 kubelet[1490]: E0407 12:57:57.442723    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0407 13:01:54.746020 1223988 logs.go:138] Found kubelet problem: Apr 07 12:58:00 old-k8s-version-907855 kubelet[1490]: E0407 12:58:00.417030    1490 pod_workers.go:191] Error syncing pod ed73d674-5412-4f15-ab2f-792e1c2d94ea ("dashboard-metrics-scraper-8d5bb5db8-rl4hx_kubernetes-dashboard(ed73d674-5412-4f15-ab2f-792e1c2d94ea)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:01:54.746234 1223988 logs.go:138] Found kubelet problem: Apr 07 12:58:09 old-k8s-version-907855 kubelet[1490]: E0407 12:58:09.416715    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:01:54.746456 1223988 logs.go:138] Found kubelet problem: Apr 07 12:58:11 old-k8s-version-907855 kubelet[1490]: E0407 12:58:11.417008    1490 pod_workers.go:191] Error syncing pod ed73d674-5412-4f15-ab2f-792e1c2d94ea ("dashboard-metrics-scraper-8d5bb5db8-rl4hx_kubernetes-dashboard(ed73d674-5412-4f15-ab2f-792e1c2d94ea)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:01:54.746675 1223988 logs.go:138] Found kubelet problem: Apr 07 12:58:23 old-k8s-version-907855 kubelet[1490]: E0407 12:58:23.416985    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:01:54.748961 1223988 logs.go:138] Found kubelet problem: Apr 07 12:58:25 old-k8s-version-907855 kubelet[1490]: E0407 12:58:25.861509    1490 pod_workers.go:191] Error syncing pod ed73d674-5412-4f15-ab2f-792e1c2d94ea ("dashboard-metrics-scraper-8d5bb5db8-rl4hx_kubernetes-dashboard(ed73d674-5412-4f15-ab2f-792e1c2d94ea)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0407 13:01:54.749173 1223988 logs.go:138] Found kubelet problem: Apr 07 12:58:36 old-k8s-version-907855 kubelet[1490]: E0407 12:58:36.417146    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:01:54.749396 1223988 logs.go:138] Found kubelet problem: Apr 07 12:58:38 old-k8s-version-907855 kubelet[1490]: E0407 12:58:38.422135    1490 pod_workers.go:191] Error syncing pod ed73d674-5412-4f15-ab2f-792e1c2d94ea ("dashboard-metrics-scraper-8d5bb5db8-rl4hx_kubernetes-dashboard(ed73d674-5412-4f15-ab2f-792e1c2d94ea)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:01:54.749605 1223988 logs.go:138] Found kubelet problem: Apr 07 12:58:48 old-k8s-version-907855 kubelet[1490]: E0407 12:58:48.427432    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:01:54.749828 1223988 logs.go:138] Found kubelet problem: Apr 07 12:58:52 old-k8s-version-907855 kubelet[1490]: E0407 12:58:52.439868    1490 pod_workers.go:191] Error syncing pod ed73d674-5412-4f15-ab2f-792e1c2d94ea ("dashboard-metrics-scraper-8d5bb5db8-rl4hx_kubernetes-dashboard(ed73d674-5412-4f15-ab2f-792e1c2d94ea)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:01:54.750038 1223988 logs.go:138] Found kubelet problem: Apr 07 12:58:59 old-k8s-version-907855 kubelet[1490]: E0407 12:58:59.417124    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:01:54.750261 1223988 logs.go:138] Found kubelet problem: Apr 07 12:59:05 old-k8s-version-907855 kubelet[1490]: E0407 12:59:05.417150    1490 pod_workers.go:191] Error syncing pod ed73d674-5412-4f15-ab2f-792e1c2d94ea ("dashboard-metrics-scraper-8d5bb5db8-rl4hx_kubernetes-dashboard(ed73d674-5412-4f15-ab2f-792e1c2d94ea)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:01:54.750493 1223988 logs.go:138] Found kubelet problem: Apr 07 12:59:13 old-k8s-version-907855 kubelet[1490]: E0407 12:59:13.417339    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:01:54.750719 1223988 logs.go:138] Found kubelet problem: Apr 07 12:59:19 old-k8s-version-907855 kubelet[1490]: E0407 12:59:19.417596    1490 pod_workers.go:191] Error syncing pod ed73d674-5412-4f15-ab2f-792e1c2d94ea ("dashboard-metrics-scraper-8d5bb5db8-rl4hx_kubernetes-dashboard(ed73d674-5412-4f15-ab2f-792e1c2d94ea)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:01:54.752888 1223988 logs.go:138] Found kubelet problem: Apr 07 12:59:27 old-k8s-version-907855 kubelet[1490]: E0407 12:59:27.433153    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0407 13:01:54.753121 1223988 logs.go:138] Found kubelet problem: Apr 07 12:59:30 old-k8s-version-907855 kubelet[1490]: E0407 12:59:30.417236    1490 pod_workers.go:191] Error syncing pod ed73d674-5412-4f15-ab2f-792e1c2d94ea ("dashboard-metrics-scraper-8d5bb5db8-rl4hx_kubernetes-dashboard(ed73d674-5412-4f15-ab2f-792e1c2d94ea)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:01:54.753331 1223988 logs.go:138] Found kubelet problem: Apr 07 12:59:42 old-k8s-version-907855 kubelet[1490]: E0407 12:59:42.417377    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:01:54.753583 1223988 logs.go:138] Found kubelet problem: Apr 07 12:59:45 old-k8s-version-907855 kubelet[1490]: E0407 12:59:45.416991    1490 pod_workers.go:191] Error syncing pod ed73d674-5412-4f15-ab2f-792e1c2d94ea ("dashboard-metrics-scraper-8d5bb5db8-rl4hx_kubernetes-dashboard(ed73d674-5412-4f15-ab2f-792e1c2d94ea)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:01:54.753812 1223988 logs.go:138] Found kubelet problem: Apr 07 12:59:53 old-k8s-version-907855 kubelet[1490]: E0407 12:59:53.417446    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:01:54.756105 1223988 logs.go:138] Found kubelet problem: Apr 07 12:59:56 old-k8s-version-907855 kubelet[1490]: E0407 12:59:56.956582    1490 pod_workers.go:191] Error syncing pod ed73d674-5412-4f15-ab2f-792e1c2d94ea ("dashboard-metrics-scraper-8d5bb5db8-rl4hx_kubernetes-dashboard(ed73d674-5412-4f15-ab2f-792e1c2d94ea)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0407 13:01:54.756342 1223988 logs.go:138] Found kubelet problem: Apr 07 13:00:06 old-k8s-version-907855 kubelet[1490]: E0407 13:00:06.417090    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:01:54.756608 1223988 logs.go:138] Found kubelet problem: Apr 07 13:00:12 old-k8s-version-907855 kubelet[1490]: E0407 13:00:12.417499    1490 pod_workers.go:191] Error syncing pod ed73d674-5412-4f15-ab2f-792e1c2d94ea ("dashboard-metrics-scraper-8d5bb5db8-rl4hx_kubernetes-dashboard(ed73d674-5412-4f15-ab2f-792e1c2d94ea)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:01:54.756835 1223988 logs.go:138] Found kubelet problem: Apr 07 13:00:21 old-k8s-version-907855 kubelet[1490]: E0407 13:00:21.417196    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:01:54.757059 1223988 logs.go:138] Found kubelet problem: Apr 07 13:00:24 old-k8s-version-907855 kubelet[1490]: E0407 13:00:24.417156    1490 pod_workers.go:191] Error syncing pod ed73d674-5412-4f15-ab2f-792e1c2d94ea ("dashboard-metrics-scraper-8d5bb5db8-rl4hx_kubernetes-dashboard(ed73d674-5412-4f15-ab2f-792e1c2d94ea)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:01:54.757268 1223988 logs.go:138] Found kubelet problem: Apr 07 13:00:35 old-k8s-version-907855 kubelet[1490]: E0407 13:00:35.417216    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:01:54.757496 1223988 logs.go:138] Found kubelet problem: Apr 07 13:00:39 old-k8s-version-907855 kubelet[1490]: E0407 13:00:39.417204    1490 pod_workers.go:191] Error syncing pod ed73d674-5412-4f15-ab2f-792e1c2d94ea ("dashboard-metrics-scraper-8d5bb5db8-rl4hx_kubernetes-dashboard(ed73d674-5412-4f15-ab2f-792e1c2d94ea)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:01:54.757716 1223988 logs.go:138] Found kubelet problem: Apr 07 13:00:46 old-k8s-version-907855 kubelet[1490]: E0407 13:00:46.418573    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:01:54.757940 1223988 logs.go:138] Found kubelet problem: Apr 07 13:00:50 old-k8s-version-907855 kubelet[1490]: E0407 13:00:50.416979    1490 pod_workers.go:191] Error syncing pod ed73d674-5412-4f15-ab2f-792e1c2d94ea ("dashboard-metrics-scraper-8d5bb5db8-rl4hx_kubernetes-dashboard(ed73d674-5412-4f15-ab2f-792e1c2d94ea)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:01:54.758150 1223988 logs.go:138] Found kubelet problem: Apr 07 13:01:01 old-k8s-version-907855 kubelet[1490]: E0407 13:01:01.417156    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:01:54.758371 1223988 logs.go:138] Found kubelet problem: Apr 07 13:01:03 old-k8s-version-907855 kubelet[1490]: E0407 13:01:03.423671    1490 pod_workers.go:191] Error syncing pod ed73d674-5412-4f15-ab2f-792e1c2d94ea ("dashboard-metrics-scraper-8d5bb5db8-rl4hx_kubernetes-dashboard(ed73d674-5412-4f15-ab2f-792e1c2d94ea)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:01:54.758581 1223988 logs.go:138] Found kubelet problem: Apr 07 13:01:14 old-k8s-version-907855 kubelet[1490]: E0407 13:01:14.418839    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:01:54.758850 1223988 logs.go:138] Found kubelet problem: Apr 07 13:01:17 old-k8s-version-907855 kubelet[1490]: E0407 13:01:17.417074    1490 pod_workers.go:191] Error syncing pod ed73d674-5412-4f15-ab2f-792e1c2d94ea ("dashboard-metrics-scraper-8d5bb5db8-rl4hx_kubernetes-dashboard(ed73d674-5412-4f15-ab2f-792e1c2d94ea)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:01:54.759063 1223988 logs.go:138] Found kubelet problem: Apr 07 13:01:28 old-k8s-version-907855 kubelet[1490]: E0407 13:01:28.417882    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:01:54.759286 1223988 logs.go:138] Found kubelet problem: Apr 07 13:01:28 old-k8s-version-907855 kubelet[1490]: E0407 13:01:28.418181    1490 pod_workers.go:191] Error syncing pod ed73d674-5412-4f15-ab2f-792e1c2d94ea ("dashboard-metrics-scraper-8d5bb5db8-rl4hx_kubernetes-dashboard(ed73d674-5412-4f15-ab2f-792e1c2d94ea)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:01:54.759495 1223988 logs.go:138] Found kubelet problem: Apr 07 13:01:41 old-k8s-version-907855 kubelet[1490]: E0407 13:01:41.417193    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:01:54.759723 1223988 logs.go:138] Found kubelet problem: Apr 07 13:01:42 old-k8s-version-907855 kubelet[1490]: E0407 13:01:42.418831    1490 pod_workers.go:191] Error syncing pod ed73d674-5412-4f15-ab2f-792e1c2d94ea ("dashboard-metrics-scraper-8d5bb5db8-rl4hx_kubernetes-dashboard(ed73d674-5412-4f15-ab2f-792e1c2d94ea)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:01:54.759934 1223988 logs.go:138] Found kubelet problem: Apr 07 13:01:53 old-k8s-version-907855 kubelet[1490]: E0407 13:01:53.423295    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0407 13:01:54.759964 1223988 out.go:358] Setting ErrFile to fd 2...
	I0407 13:01:54.759987 1223988 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0407 13:01:54.760077 1223988 out.go:270] X Problems detected in kubelet:
	W0407 13:01:54.760117 1223988 out.go:270]   Apr 07 13:01:28 old-k8s-version-907855 kubelet[1490]: E0407 13:01:28.417882    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:01:54.760149 1223988 out.go:270]   Apr 07 13:01:28 old-k8s-version-907855 kubelet[1490]: E0407 13:01:28.418181    1490 pod_workers.go:191] Error syncing pod ed73d674-5412-4f15-ab2f-792e1c2d94ea ("dashboard-metrics-scraper-8d5bb5db8-rl4hx_kubernetes-dashboard(ed73d674-5412-4f15-ab2f-792e1c2d94ea)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:01:54.760199 1223988 out.go:270]   Apr 07 13:01:41 old-k8s-version-907855 kubelet[1490]: E0407 13:01:41.417193    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:01:54.760230 1223988 out.go:270]   Apr 07 13:01:42 old-k8s-version-907855 kubelet[1490]: E0407 13:01:42.418831    1490 pod_workers.go:191] Error syncing pod ed73d674-5412-4f15-ab2f-792e1c2d94ea ("dashboard-metrics-scraper-8d5bb5db8-rl4hx_kubernetes-dashboard(ed73d674-5412-4f15-ab2f-792e1c2d94ea)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:01:54.760262 1223988 out.go:270]   Apr 07 13:01:53 old-k8s-version-907855 kubelet[1490]: E0407 13:01:53.423295    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0407 13:01:54.760319 1223988 out.go:358] Setting ErrFile to fd 2...
	I0407 13:01:54.760337 1223988 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 13:02:04.762000 1223988 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:02:04.778328 1223988 api_server.go:72] duration metric: took 5m49.458475569s to wait for apiserver process to appear ...
	I0407 13:02:04.778357 1223988 api_server.go:88] waiting for apiserver healthz status ...
	I0407 13:02:04.778437 1223988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0407 13:02:04.821300 1223988 logs.go:282] 2 containers: [8e9ca3cf686f 002b3321c8c9]
	I0407 13:02:04.821386 1223988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0407 13:02:04.857002 1223988 logs.go:282] 2 containers: [499bde040d37 76fcb451fd44]
	I0407 13:02:04.857092 1223988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0407 13:02:04.894956 1223988 logs.go:282] 2 containers: [e6a43a71b1f6 73c94e36d8a2]
	I0407 13:02:04.895044 1223988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0407 13:02:04.919064 1223988 logs.go:282] 2 containers: [0f335eb94cad 9715ae775fae]
	I0407 13:02:04.919149 1223988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0407 13:02:04.952111 1223988 logs.go:282] 2 containers: [02c99fe2d89e 308161cfd111]
	I0407 13:02:04.952200 1223988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0407 13:02:05.001073 1223988 logs.go:282] 2 containers: [5abec15abc05 3652c993a04e]
	I0407 13:02:05.001172 1223988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0407 13:02:05.040585 1223988 logs.go:282] 0 containers: []
	W0407 13:02:05.040630 1223988 logs.go:284] No container was found matching "kindnet"
	I0407 13:02:05.040711 1223988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0407 13:02:05.069987 1223988 logs.go:282] 1 containers: [f4034a5c5e25]
	I0407 13:02:05.070085 1223988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0407 13:02:05.107487 1223988 logs.go:282] 2 containers: [71f6bbb99341 49a236bde2cb]
	I0407 13:02:05.107534 1223988 logs.go:123] Gathering logs for kube-proxy [02c99fe2d89e] ...
	I0407 13:02:05.107546 1223988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02c99fe2d89e"
	I0407 13:02:05.150377 1223988 logs.go:123] Gathering logs for describe nodes ...
	I0407 13:02:05.150406 1223988 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0407 13:02:05.432358 1223988 logs.go:123] Gathering logs for kube-apiserver [8e9ca3cf686f] ...
	I0407 13:02:05.432390 1223988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e9ca3cf686f"
	I0407 13:02:05.519522 1223988 logs.go:123] Gathering logs for etcd [499bde040d37] ...
	I0407 13:02:05.519563 1223988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 499bde040d37"
	I0407 13:02:05.584535 1223988 logs.go:123] Gathering logs for etcd [76fcb451fd44] ...
	I0407 13:02:05.584570 1223988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76fcb451fd44"
	I0407 13:02:05.629466 1223988 logs.go:123] Gathering logs for coredns [e6a43a71b1f6] ...
	I0407 13:02:05.629499 1223988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6a43a71b1f6"
	I0407 13:02:05.659764 1223988 logs.go:123] Gathering logs for coredns [73c94e36d8a2] ...
	I0407 13:02:05.659798 1223988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73c94e36d8a2"
	I0407 13:02:05.698989 1223988 logs.go:123] Gathering logs for kube-scheduler [0f335eb94cad] ...
	I0407 13:02:05.699018 1223988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f335eb94cad"
	I0407 13:02:05.731708 1223988 logs.go:123] Gathering logs for kube-apiserver [002b3321c8c9] ...
	I0407 13:02:05.731778 1223988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 002b3321c8c9"
	I0407 13:02:05.878737 1223988 logs.go:123] Gathering logs for kube-scheduler [9715ae775fae] ...
	I0407 13:02:05.878813 1223988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9715ae775fae"
	I0407 13:02:05.909034 1223988 logs.go:123] Gathering logs for kube-controller-manager [3652c993a04e] ...
	I0407 13:02:05.909105 1223988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3652c993a04e"
	I0407 13:02:05.964339 1223988 logs.go:123] Gathering logs for kubernetes-dashboard [f4034a5c5e25] ...
	I0407 13:02:05.964418 1223988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4034a5c5e25"
	I0407 13:02:05.994884 1223988 logs.go:123] Gathering logs for Docker ...
	I0407 13:02:05.994956 1223988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0407 13:02:06.062540 1223988 logs.go:123] Gathering logs for container status ...
	I0407 13:02:06.062608 1223988 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 13:02:06.188493 1223988 logs.go:123] Gathering logs for dmesg ...
	I0407 13:02:06.188612 1223988 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 13:02:06.216307 1223988 logs.go:123] Gathering logs for kubelet ...
	I0407 13:02:06.216376 1223988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0407 13:02:06.294580 1223988 logs.go:138] Found kubelet problem: Apr 07 12:56:27 old-k8s-version-907855 kubelet[1490]: E0407 12:56:27.364375    1490 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-907855" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-907855' and this object
	W0407 13:02:06.294840 1223988 logs.go:138] Found kubelet problem: Apr 07 12:56:27 old-k8s-version-907855 kubelet[1490]: E0407 12:56:27.364549    1490 reflector.go:138] object-"kube-system"/"kube-proxy-token-vfglr": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-vfglr" is forbidden: User "system:node:old-k8s-version-907855" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-907855' and this object
	W0407 13:02:06.295051 1223988 logs.go:138] Found kubelet problem: Apr 07 12:56:27 old-k8s-version-907855 kubelet[1490]: E0407 12:56:27.364618    1490 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-907855" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-907855' and this object
	W0407 13:02:06.295275 1223988 logs.go:138] Found kubelet problem: Apr 07 12:56:27 old-k8s-version-907855 kubelet[1490]: E0407 12:56:27.364679    1490 reflector.go:138] object-"kube-system"/"metrics-server-token-gjsq5": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-gjsq5" is forbidden: User "system:node:old-k8s-version-907855" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-907855' and this object
	W0407 13:02:06.295503 1223988 logs.go:138] Found kubelet problem: Apr 07 12:56:27 old-k8s-version-907855 kubelet[1490]: E0407 12:56:27.364737    1490 reflector.go:138] object-"kube-system"/"storage-provisioner-token-z49vs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-z49vs" is forbidden: User "system:node:old-k8s-version-907855" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-907855' and this object
	W0407 13:02:06.296700 1223988 logs.go:138] Found kubelet problem: Apr 07 12:56:27 old-k8s-version-907855 kubelet[1490]: E0407 12:56:27.371448    1490 reflector.go:138] object-"kube-system"/"coredns-token-nwrv9": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-nwrv9" is forbidden: User "system:node:old-k8s-version-907855" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-907855' and this object
	W0407 13:02:06.296995 1223988 logs.go:138] Found kubelet problem: Apr 07 12:56:27 old-k8s-version-907855 kubelet[1490]: E0407 12:56:27.371503    1490 reflector.go:138] object-"default"/"default-token-ld75m": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-ld75m" is forbidden: User "system:node:old-k8s-version-907855" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-907855' and this object
	W0407 13:02:06.303834 1223988 logs.go:138] Found kubelet problem: Apr 07 12:56:30 old-k8s-version-907855 kubelet[1490]: E0407 12:56:30.148275    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0407 13:02:06.304821 1223988 logs.go:138] Found kubelet problem: Apr 07 12:56:30 old-k8s-version-907855 kubelet[1490]: E0407 12:56:30.608856    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:02:06.305574 1223988 logs.go:138] Found kubelet problem: Apr 07 12:56:31 old-k8s-version-907855 kubelet[1490]: E0407 12:56:31.653412    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:02:06.308360 1223988 logs.go:138] Found kubelet problem: Apr 07 12:56:44 old-k8s-version-907855 kubelet[1490]: E0407 12:56:44.480129    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0407 13:02:06.308907 1223988 logs.go:138] Found kubelet problem: Apr 07 12:56:48 old-k8s-version-907855 kubelet[1490]: E0407 12:56:48.210975    1490 reflector.go:138] object-"kubernetes-dashboard"/"kubernetes-dashboard-token-6t88s": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kubernetes-dashboard-token-6t88s" is forbidden: User "system:node:old-k8s-version-907855" cannot list resource "secrets" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'old-k8s-version-907855' and this object
	W0407 13:02:06.313970 1223988 logs.go:138] Found kubelet problem: Apr 07 12:56:56 old-k8s-version-907855 kubelet[1490]: E0407 12:56:56.515766    1490 pod_workers.go:191] Error syncing pod ed73d674-5412-4f15-ab2f-792e1c2d94ea ("dashboard-metrics-scraper-8d5bb5db8-rl4hx_kubernetes-dashboard(ed73d674-5412-4f15-ab2f-792e1c2d94ea)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0407 13:02:06.314363 1223988 logs.go:138] Found kubelet problem: Apr 07 12:56:57 old-k8s-version-907855 kubelet[1490]: E0407 12:56:57.209281    1490 pod_workers.go:191] Error syncing pod ed73d674-5412-4f15-ab2f-792e1c2d94ea ("dashboard-metrics-scraper-8d5bb5db8-rl4hx_kubernetes-dashboard(ed73d674-5412-4f15-ab2f-792e1c2d94ea)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:02:06.314553 1223988 logs.go:138] Found kubelet problem: Apr 07 12:56:57 old-k8s-version-907855 kubelet[1490]: E0407 12:56:57.418876    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:02:06.315328 1223988 logs.go:138] Found kubelet problem: Apr 07 12:57:01 old-k8s-version-907855 kubelet[1490]: E0407 12:57:01.269861    1490 pod_workers.go:191] Error syncing pod dc3ff993-a34e-429d-8975-38688893221d ("storage-provisioner_kube-system(dc3ff993-a34e-429d-8975-38688893221d)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(dc3ff993-a34e-429d-8975-38688893221d)"
	W0407 13:02:06.317500 1223988 logs.go:138] Found kubelet problem: Apr 07 12:57:08 old-k8s-version-907855 kubelet[1490]: E0407 12:57:08.473737    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0407 13:02:06.320290 1223988 logs.go:138] Found kubelet problem: Apr 07 12:57:09 old-k8s-version-907855 kubelet[1490]: E0407 12:57:09.895279    1490 pod_workers.go:191] Error syncing pod ed73d674-5412-4f15-ab2f-792e1c2d94ea ("dashboard-metrics-scraper-8d5bb5db8-rl4hx_kubernetes-dashboard(ed73d674-5412-4f15-ab2f-792e1c2d94ea)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0407 13:02:06.320661 1223988 logs.go:138] Found kubelet problem: Apr 07 12:57:19 old-k8s-version-907855 kubelet[1490]: E0407 12:57:19.417047    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:02:06.320876 1223988 logs.go:138] Found kubelet problem: Apr 07 12:57:20 old-k8s-version-907855 kubelet[1490]: E0407 12:57:20.425017    1490 pod_workers.go:191] Error syncing pod ed73d674-5412-4f15-ab2f-792e1c2d94ea ("dashboard-metrics-scraper-8d5bb5db8-rl4hx_kubernetes-dashboard(ed73d674-5412-4f15-ab2f-792e1c2d94ea)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:02:06.323182 1223988 logs.go:138] Found kubelet problem: Apr 07 12:57:32 old-k8s-version-907855 kubelet[1490]: E0407 12:57:32.874830    1490 pod_workers.go:191] Error syncing pod ed73d674-5412-4f15-ab2f-792e1c2d94ea ("dashboard-metrics-scraper-8d5bb5db8-rl4hx_kubernetes-dashboard(ed73d674-5412-4f15-ab2f-792e1c2d94ea)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0407 13:02:06.323372 1223988 logs.go:138] Found kubelet problem: Apr 07 12:57:34 old-k8s-version-907855 kubelet[1490]: E0407 12:57:34.417561    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:02:06.323559 1223988 logs.go:138] Found kubelet problem: Apr 07 12:57:45 old-k8s-version-907855 kubelet[1490]: E0407 12:57:45.417155    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:02:06.323792 1223988 logs.go:138] Found kubelet problem: Apr 07 12:57:48 old-k8s-version-907855 kubelet[1490]: E0407 12:57:48.430562    1490 pod_workers.go:191] Error syncing pod ed73d674-5412-4f15-ab2f-792e1c2d94ea ("dashboard-metrics-scraper-8d5bb5db8-rl4hx_kubernetes-dashboard(ed73d674-5412-4f15-ab2f-792e1c2d94ea)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:02:06.325868 1223988 logs.go:138] Found kubelet problem: Apr 07 12:57:57 old-k8s-version-907855 kubelet[1490]: E0407 12:57:57.442723    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0407 13:02:06.326067 1223988 logs.go:138] Found kubelet problem: Apr 07 12:58:00 old-k8s-version-907855 kubelet[1490]: E0407 12:58:00.417030    1490 pod_workers.go:191] Error syncing pod ed73d674-5412-4f15-ab2f-792e1c2d94ea ("dashboard-metrics-scraper-8d5bb5db8-rl4hx_kubernetes-dashboard(ed73d674-5412-4f15-ab2f-792e1c2d94ea)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:02:06.326253 1223988 logs.go:138] Found kubelet problem: Apr 07 12:58:09 old-k8s-version-907855 kubelet[1490]: E0407 12:58:09.416715    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:02:06.326472 1223988 logs.go:138] Found kubelet problem: Apr 07 12:58:11 old-k8s-version-907855 kubelet[1490]: E0407 12:58:11.417008    1490 pod_workers.go:191] Error syncing pod ed73d674-5412-4f15-ab2f-792e1c2d94ea ("dashboard-metrics-scraper-8d5bb5db8-rl4hx_kubernetes-dashboard(ed73d674-5412-4f15-ab2f-792e1c2d94ea)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:02:06.326674 1223988 logs.go:138] Found kubelet problem: Apr 07 12:58:23 old-k8s-version-907855 kubelet[1490]: E0407 12:58:23.416985    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:02:06.328950 1223988 logs.go:138] Found kubelet problem: Apr 07 12:58:25 old-k8s-version-907855 kubelet[1490]: E0407 12:58:25.861509    1490 pod_workers.go:191] Error syncing pod ed73d674-5412-4f15-ab2f-792e1c2d94ea ("dashboard-metrics-scraper-8d5bb5db8-rl4hx_kubernetes-dashboard(ed73d674-5412-4f15-ab2f-792e1c2d94ea)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0407 13:02:06.329139 1223988 logs.go:138] Found kubelet problem: Apr 07 12:58:36 old-k8s-version-907855 kubelet[1490]: E0407 12:58:36.417146    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:02:06.329337 1223988 logs.go:138] Found kubelet problem: Apr 07 12:58:38 old-k8s-version-907855 kubelet[1490]: E0407 12:58:38.422135    1490 pod_workers.go:191] Error syncing pod ed73d674-5412-4f15-ab2f-792e1c2d94ea ("dashboard-metrics-scraper-8d5bb5db8-rl4hx_kubernetes-dashboard(ed73d674-5412-4f15-ab2f-792e1c2d94ea)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:02:06.329522 1223988 logs.go:138] Found kubelet problem: Apr 07 12:58:48 old-k8s-version-907855 kubelet[1490]: E0407 12:58:48.427432    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:02:06.329721 1223988 logs.go:138] Found kubelet problem: Apr 07 12:58:52 old-k8s-version-907855 kubelet[1490]: E0407 12:58:52.439868    1490 pod_workers.go:191] Error syncing pod ed73d674-5412-4f15-ab2f-792e1c2d94ea ("dashboard-metrics-scraper-8d5bb5db8-rl4hx_kubernetes-dashboard(ed73d674-5412-4f15-ab2f-792e1c2d94ea)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:02:06.329934 1223988 logs.go:138] Found kubelet problem: Apr 07 12:58:59 old-k8s-version-907855 kubelet[1490]: E0407 12:58:59.417124    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:02:06.330135 1223988 logs.go:138] Found kubelet problem: Apr 07 12:59:05 old-k8s-version-907855 kubelet[1490]: E0407 12:59:05.417150    1490 pod_workers.go:191] Error syncing pod ed73d674-5412-4f15-ab2f-792e1c2d94ea ("dashboard-metrics-scraper-8d5bb5db8-rl4hx_kubernetes-dashboard(ed73d674-5412-4f15-ab2f-792e1c2d94ea)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:02:06.330319 1223988 logs.go:138] Found kubelet problem: Apr 07 12:59:13 old-k8s-version-907855 kubelet[1490]: E0407 12:59:13.417339    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:02:06.330546 1223988 logs.go:138] Found kubelet problem: Apr 07 12:59:19 old-k8s-version-907855 kubelet[1490]: E0407 12:59:19.417596    1490 pod_workers.go:191] Error syncing pod ed73d674-5412-4f15-ab2f-792e1c2d94ea ("dashboard-metrics-scraper-8d5bb5db8-rl4hx_kubernetes-dashboard(ed73d674-5412-4f15-ab2f-792e1c2d94ea)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:02:06.332724 1223988 logs.go:138] Found kubelet problem: Apr 07 12:59:27 old-k8s-version-907855 kubelet[1490]: E0407 12:59:27.433153    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0407 13:02:06.332935 1223988 logs.go:138] Found kubelet problem: Apr 07 12:59:30 old-k8s-version-907855 kubelet[1490]: E0407 12:59:30.417236    1490 pod_workers.go:191] Error syncing pod ed73d674-5412-4f15-ab2f-792e1c2d94ea ("dashboard-metrics-scraper-8d5bb5db8-rl4hx_kubernetes-dashboard(ed73d674-5412-4f15-ab2f-792e1c2d94ea)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:02:06.333123 1223988 logs.go:138] Found kubelet problem: Apr 07 12:59:42 old-k8s-version-907855 kubelet[1490]: E0407 12:59:42.417377    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:02:06.333322 1223988 logs.go:138] Found kubelet problem: Apr 07 12:59:45 old-k8s-version-907855 kubelet[1490]: E0407 12:59:45.416991    1490 pod_workers.go:191] Error syncing pod ed73d674-5412-4f15-ab2f-792e1c2d94ea ("dashboard-metrics-scraper-8d5bb5db8-rl4hx_kubernetes-dashboard(ed73d674-5412-4f15-ab2f-792e1c2d94ea)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:02:06.333508 1223988 logs.go:138] Found kubelet problem: Apr 07 12:59:53 old-k8s-version-907855 kubelet[1490]: E0407 12:59:53.417446    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:02:06.335771 1223988 logs.go:138] Found kubelet problem: Apr 07 12:59:56 old-k8s-version-907855 kubelet[1490]: E0407 12:59:56.956582    1490 pod_workers.go:191] Error syncing pod ed73d674-5412-4f15-ab2f-792e1c2d94ea ("dashboard-metrics-scraper-8d5bb5db8-rl4hx_kubernetes-dashboard(ed73d674-5412-4f15-ab2f-792e1c2d94ea)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0407 13:02:06.335959 1223988 logs.go:138] Found kubelet problem: Apr 07 13:00:06 old-k8s-version-907855 kubelet[1490]: E0407 13:00:06.417090    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:02:06.336156 1223988 logs.go:138] Found kubelet problem: Apr 07 13:00:12 old-k8s-version-907855 kubelet[1490]: E0407 13:00:12.417499    1490 pod_workers.go:191] Error syncing pod ed73d674-5412-4f15-ab2f-792e1c2d94ea ("dashboard-metrics-scraper-8d5bb5db8-rl4hx_kubernetes-dashboard(ed73d674-5412-4f15-ab2f-792e1c2d94ea)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:02:06.336340 1223988 logs.go:138] Found kubelet problem: Apr 07 13:00:21 old-k8s-version-907855 kubelet[1490]: E0407 13:00:21.417196    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:02:06.336571 1223988 logs.go:138] Found kubelet problem: Apr 07 13:00:24 old-k8s-version-907855 kubelet[1490]: E0407 13:00:24.417156    1490 pod_workers.go:191] Error syncing pod ed73d674-5412-4f15-ab2f-792e1c2d94ea ("dashboard-metrics-scraper-8d5bb5db8-rl4hx_kubernetes-dashboard(ed73d674-5412-4f15-ab2f-792e1c2d94ea)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:02:06.336809 1223988 logs.go:138] Found kubelet problem: Apr 07 13:00:35 old-k8s-version-907855 kubelet[1490]: E0407 13:00:35.417216    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:02:06.337012 1223988 logs.go:138] Found kubelet problem: Apr 07 13:00:39 old-k8s-version-907855 kubelet[1490]: E0407 13:00:39.417204    1490 pod_workers.go:191] Error syncing pod ed73d674-5412-4f15-ab2f-792e1c2d94ea ("dashboard-metrics-scraper-8d5bb5db8-rl4hx_kubernetes-dashboard(ed73d674-5412-4f15-ab2f-792e1c2d94ea)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:02:06.337199 1223988 logs.go:138] Found kubelet problem: Apr 07 13:00:46 old-k8s-version-907855 kubelet[1490]: E0407 13:00:46.418573    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:02:06.337396 1223988 logs.go:138] Found kubelet problem: Apr 07 13:00:50 old-k8s-version-907855 kubelet[1490]: E0407 13:00:50.416979    1490 pod_workers.go:191] Error syncing pod ed73d674-5412-4f15-ab2f-792e1c2d94ea ("dashboard-metrics-scraper-8d5bb5db8-rl4hx_kubernetes-dashboard(ed73d674-5412-4f15-ab2f-792e1c2d94ea)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:02:06.337580 1223988 logs.go:138] Found kubelet problem: Apr 07 13:01:01 old-k8s-version-907855 kubelet[1490]: E0407 13:01:01.417156    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:02:06.337776 1223988 logs.go:138] Found kubelet problem: Apr 07 13:01:03 old-k8s-version-907855 kubelet[1490]: E0407 13:01:03.423671    1490 pod_workers.go:191] Error syncing pod ed73d674-5412-4f15-ab2f-792e1c2d94ea ("dashboard-metrics-scraper-8d5bb5db8-rl4hx_kubernetes-dashboard(ed73d674-5412-4f15-ab2f-792e1c2d94ea)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:02:06.337961 1223988 logs.go:138] Found kubelet problem: Apr 07 13:01:14 old-k8s-version-907855 kubelet[1490]: E0407 13:01:14.418839    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:02:06.338173 1223988 logs.go:138] Found kubelet problem: Apr 07 13:01:17 old-k8s-version-907855 kubelet[1490]: E0407 13:01:17.417074    1490 pod_workers.go:191] Error syncing pod ed73d674-5412-4f15-ab2f-792e1c2d94ea ("dashboard-metrics-scraper-8d5bb5db8-rl4hx_kubernetes-dashboard(ed73d674-5412-4f15-ab2f-792e1c2d94ea)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:02:06.338375 1223988 logs.go:138] Found kubelet problem: Apr 07 13:01:28 old-k8s-version-907855 kubelet[1490]: E0407 13:01:28.417882    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:02:06.338573 1223988 logs.go:138] Found kubelet problem: Apr 07 13:01:28 old-k8s-version-907855 kubelet[1490]: E0407 13:01:28.418181    1490 pod_workers.go:191] Error syncing pod ed73d674-5412-4f15-ab2f-792e1c2d94ea ("dashboard-metrics-scraper-8d5bb5db8-rl4hx_kubernetes-dashboard(ed73d674-5412-4f15-ab2f-792e1c2d94ea)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:02:06.338764 1223988 logs.go:138] Found kubelet problem: Apr 07 13:01:41 old-k8s-version-907855 kubelet[1490]: E0407 13:01:41.417193    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:02:06.338962 1223988 logs.go:138] Found kubelet problem: Apr 07 13:01:42 old-k8s-version-907855 kubelet[1490]: E0407 13:01:42.418831    1490 pod_workers.go:191] Error syncing pod ed73d674-5412-4f15-ab2f-792e1c2d94ea ("dashboard-metrics-scraper-8d5bb5db8-rl4hx_kubernetes-dashboard(ed73d674-5412-4f15-ab2f-792e1c2d94ea)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:02:06.339172 1223988 logs.go:138] Found kubelet problem: Apr 07 13:01:53 old-k8s-version-907855 kubelet[1490]: E0407 13:01:53.423295    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:02:06.339373 1223988 logs.go:138] Found kubelet problem: Apr 07 13:01:57 old-k8s-version-907855 kubelet[1490]: E0407 13:01:57.417250    1490 pod_workers.go:191] Error syncing pod ed73d674-5412-4f15-ab2f-792e1c2d94ea ("dashboard-metrics-scraper-8d5bb5db8-rl4hx_kubernetes-dashboard(ed73d674-5412-4f15-ab2f-792e1c2d94ea)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:02:06.339557 1223988 logs.go:138] Found kubelet problem: Apr 07 13:02:05 old-k8s-version-907855 kubelet[1490]: E0407 13:02:05.421584    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0407 13:02:06.339569 1223988 logs.go:123] Gathering logs for kube-proxy [308161cfd111] ...
	I0407 13:02:06.339584 1223988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 308161cfd111"
	I0407 13:02:06.377827 1223988 logs.go:123] Gathering logs for kube-controller-manager [5abec15abc05] ...
	I0407 13:02:06.377853 1223988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5abec15abc05"
	I0407 13:02:06.460822 1223988 logs.go:123] Gathering logs for storage-provisioner [71f6bbb99341] ...
	I0407 13:02:06.463251 1223988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71f6bbb99341"
	I0407 13:02:06.503868 1223988 logs.go:123] Gathering logs for storage-provisioner [49a236bde2cb] ...
	I0407 13:02:06.503940 1223988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49a236bde2cb"
	I0407 13:02:06.528210 1223988 out.go:358] Setting ErrFile to fd 2...
	I0407 13:02:06.528231 1223988 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0407 13:02:06.528274 1223988 out.go:270] X Problems detected in kubelet:
	W0407 13:02:06.528285 1223988 out.go:270]   Apr 07 13:01:41 old-k8s-version-907855 kubelet[1490]: E0407 13:01:41.417193    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:02:06.528296 1223988 out.go:270]   Apr 07 13:01:42 old-k8s-version-907855 kubelet[1490]: E0407 13:01:42.418831    1490 pod_workers.go:191] Error syncing pod ed73d674-5412-4f15-ab2f-792e1c2d94ea ("dashboard-metrics-scraper-8d5bb5db8-rl4hx_kubernetes-dashboard(ed73d674-5412-4f15-ab2f-792e1c2d94ea)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:02:06.528304 1223988 out.go:270]   Apr 07 13:01:53 old-k8s-version-907855 kubelet[1490]: E0407 13:01:53.423295    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:02:06.528309 1223988 out.go:270]   Apr 07 13:01:57 old-k8s-version-907855 kubelet[1490]: E0407 13:01:57.417250    1490 pod_workers.go:191] Error syncing pod ed73d674-5412-4f15-ab2f-792e1c2d94ea ("dashboard-metrics-scraper-8d5bb5db8-rl4hx_kubernetes-dashboard(ed73d674-5412-4f15-ab2f-792e1c2d94ea)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:02:06.528314 1223988 out.go:270]   Apr 07 13:02:05 old-k8s-version-907855 kubelet[1490]: E0407 13:02:05.421584    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0407 13:02:06.528324 1223988 out.go:358] Setting ErrFile to fd 2...
	I0407 13:02:06.528330 1223988 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 13:02:16.529750 1223988 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0407 13:02:16.539005 1223988 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I0407 13:02:16.542065 1223988 out.go:201] 
	W0407 13:02:16.545157 1223988 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W0407 13:02:16.545250 1223988 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	W0407 13:02:16.545294 1223988 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	W0407 13:02:16.545328 1223988 out.go:270] * 
	W0407 13:02:16.546231 1223988 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0407 13:02:16.550123 1223988 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p old-k8s-version-907855 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0": exit status 102
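The stderr above already carries the suggested recovery path for this K8S_UNHEALTHY_CONTROL_PLANE exit. A minimal sketch of that path, assuming the same profile name and binary used by this run (local reproduction outside the harness, not something the test itself executes):

	# Wipe all profiles and cached state, per the "Suggestion" line in the log
	out/minikube-linux-arm64 delete --all --purge
	# Retry the start invocation that exited with status 102
	out/minikube-linux-arm64 start -p old-k8s-version-907855 --memory=2200 --driver=docker --container-runtime=docker --kubernetes-version=v1.20.0
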
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-907855
helpers_test.go:235: (dbg) docker inspect old-k8s-version-907855:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "8948e2a603f280628de070853fcc1c5394150286fab87140292c0cd4e16ca440",
	        "Created": "2025-04-07T12:53:20.058253252Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1224133,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-04-07T12:56:06.454882988Z",
	            "FinishedAt": "2025-04-07T12:56:05.369126924Z"
	        },
	        "Image": "sha256:1a97cd9e9bbab266425b883d3ed87fee4969302ed9a49ce4df4bf460f6bbf404",
	        "ResolvConfPath": "/var/lib/docker/containers/8948e2a603f280628de070853fcc1c5394150286fab87140292c0cd4e16ca440/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8948e2a603f280628de070853fcc1c5394150286fab87140292c0cd4e16ca440/hostname",
	        "HostsPath": "/var/lib/docker/containers/8948e2a603f280628de070853fcc1c5394150286fab87140292c0cd4e16ca440/hosts",
	        "LogPath": "/var/lib/docker/containers/8948e2a603f280628de070853fcc1c5394150286fab87140292c0cd4e16ca440/8948e2a603f280628de070853fcc1c5394150286fab87140292c0cd4e16ca440-json.log",
	        "Name": "/old-k8s-version-907855",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-907855:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-907855",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8948e2a603f280628de070853fcc1c5394150286fab87140292c0cd4e16ca440",
	                "LowerDir": "/var/lib/docker/overlay2/8677ec66035f5133b981eb8484815f1b0a32b8ebecd2b47de6cea17c205ec737-init/diff:/var/lib/docker/overlay2/62463113c498faf5e7eec9d872ce62a2a6bdf87c3c0d9f9a8582d8f051c10606/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8677ec66035f5133b981eb8484815f1b0a32b8ebecd2b47de6cea17c205ec737/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8677ec66035f5133b981eb8484815f1b0a32b8ebecd2b47de6cea17c205ec737/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8677ec66035f5133b981eb8484815f1b0a32b8ebecd2b47de6cea17c205ec737/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-907855",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-907855/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-907855",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-907855",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-907855",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f1610bfd8a3bd008f1722ce750e71557837f42294ef4ad7c57637aa49dc5cb3a",
	            "SandboxKey": "/var/run/docker/netns/f1610bfd8a3b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34191"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34192"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34195"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34193"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34194"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-907855": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "6a:c2:4d:47:1b:7a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0787eecb8d63967f7f42d7267e9acde725f2ebba548df6b8fe1a87632a1ccef8",
	                    "EndpointID": "93a860199294516f485931aada0ff2d3be342b77cf49a0d44ff409a5180c6d37",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-907855",
	                        "8948e2a603f2"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
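The dump above is the full docker container inspect output for the old-k8s-version-907855 container; the published ports live under NetworkSettings.Ports. As a minimal sketch (profile name and port taken from the dump above; the Go template mirrors the one the test harness itself runs later in these logs), the SSH port mapping can be read back directly:

	# prints the host port mapped to container port 22 -- 34191 per the Ports block above
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' old-k8s-version-907855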
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-907855 -n old-k8s-version-907855
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-907855 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-907855 logs -n 25: (1.47064848s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| ssh     | docker-flags-367418 ssh                                | docker-flags-367418    | jenkins | v1.35.0 | 07 Apr 25 12:52 UTC | 07 Apr 25 12:52 UTC |
	|         | sudo systemctl show docker                             |                        |         |         |                     |                     |
	|         | --property=Environment                                 |                        |         |         |                     |                     |
	|         | --no-pager                                             |                        |         |         |                     |                     |
	| ssh     | docker-flags-367418 ssh                                | docker-flags-367418    | jenkins | v1.35.0 | 07 Apr 25 12:52 UTC | 07 Apr 25 12:52 UTC |
	|         | sudo systemctl show docker                             |                        |         |         |                     |                     |
	|         | --property=ExecStart                                   |                        |         |         |                     |                     |
	|         | --no-pager                                             |                        |         |         |                     |                     |
	| delete  | -p docker-flags-367418                                 | docker-flags-367418    | jenkins | v1.35.0 | 07 Apr 25 12:52 UTC | 07 Apr 25 12:52 UTC |
	| start   | -p cert-options-736687                                 | cert-options-736687    | jenkins | v1.35.0 | 07 Apr 25 12:52 UTC | 07 Apr 25 12:53 UTC |
	|         | --memory=2048                                          |                        |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1                              |                        |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15                          |                        |         |         |                     |                     |
	|         | --apiserver-names=localhost                            |                        |         |         |                     |                     |
	|         | --apiserver-names=www.google.com                       |                        |         |         |                     |                     |
	|         | --apiserver-port=8555                                  |                        |         |         |                     |                     |
	|         | --driver=docker                                        |                        |         |         |                     |                     |
	|         | --container-runtime=docker                             |                        |         |         |                     |                     |
	| ssh     | cert-options-736687 ssh                                | cert-options-736687    | jenkins | v1.35.0 | 07 Apr 25 12:53 UTC | 07 Apr 25 12:53 UTC |
	|         | openssl x509 -text -noout -in                          |                        |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                        |         |         |                     |                     |
	| ssh     | -p cert-options-736687 -- sudo                         | cert-options-736687    | jenkins | v1.35.0 | 07 Apr 25 12:53 UTC | 07 Apr 25 12:53 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                        |         |         |                     |                     |
	| delete  | -p cert-options-736687                                 | cert-options-736687    | jenkins | v1.35.0 | 07 Apr 25 12:53 UTC | 07 Apr 25 12:53 UTC |
	| start   | -p old-k8s-version-907855                              | old-k8s-version-907855 | jenkins | v1.35.0 | 07 Apr 25 12:53 UTC | 07 Apr 25 12:55 UTC |
	|         | --memory=2200                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                        |         |         |                     |                     |
	|         | --kvm-network=default                                  |                        |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                        |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                        |         |         |                     |                     |
	|         | --keep-context=false                                   |                        |         |         |                     |                     |
	|         | --driver=docker                                        |                        |         |         |                     |                     |
	|         | --container-runtime=docker                             |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                        |         |         |                     |                     |
	| start   | -p cert-expiration-550861                              | cert-expiration-550861 | jenkins | v1.35.0 | 07 Apr 25 12:54 UTC | 07 Apr 25 12:55 UTC |
	|         | --memory=2048                                          |                        |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                        |         |         |                     |                     |
	|         | --driver=docker                                        |                        |         |         |                     |                     |
	|         | --container-runtime=docker                             |                        |         |         |                     |                     |
	| delete  | -p cert-expiration-550861                              | cert-expiration-550861 | jenkins | v1.35.0 | 07 Apr 25 12:55 UTC | 07 Apr 25 12:55 UTC |
	| start   | -p no-preload-302149                                   | no-preload-302149      | jenkins | v1.35.0 | 07 Apr 25 12:55 UTC | 07 Apr 25 12:56 UTC |
	|         | --memory=2200                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr                                      |                        |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                        |         |         |                     |                     |
	|         | --driver=docker                                        |                        |         |         |                     |                     |
	|         | --container-runtime=docker                             |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                           |                        |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-907855        | old-k8s-version-907855 | jenkins | v1.35.0 | 07 Apr 25 12:55 UTC | 07 Apr 25 12:55 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                        |         |         |                     |                     |
	| stop    | -p old-k8s-version-907855                              | old-k8s-version-907855 | jenkins | v1.35.0 | 07 Apr 25 12:55 UTC | 07 Apr 25 12:56 UTC |
	|         | --alsologtostderr -v=3                                 |                        |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-907855             | old-k8s-version-907855 | jenkins | v1.35.0 | 07 Apr 25 12:56 UTC | 07 Apr 25 12:56 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                        |         |         |                     |                     |
	| start   | -p old-k8s-version-907855                              | old-k8s-version-907855 | jenkins | v1.35.0 | 07 Apr 25 12:56 UTC |                     |
	|         | --memory=2200                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                        |         |         |                     |                     |
	|         | --kvm-network=default                                  |                        |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                        |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                        |         |         |                     |                     |
	|         | --keep-context=false                                   |                        |         |         |                     |                     |
	|         | --driver=docker                                        |                        |         |         |                     |                     |
	|         | --container-runtime=docker                             |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                        |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-302149             | no-preload-302149      | jenkins | v1.35.0 | 07 Apr 25 12:56 UTC | 07 Apr 25 12:56 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                        |         |         |                     |                     |
	| stop    | -p no-preload-302149                                   | no-preload-302149      | jenkins | v1.35.0 | 07 Apr 25 12:56 UTC | 07 Apr 25 12:56 UTC |
	|         | --alsologtostderr -v=3                                 |                        |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-302149                  | no-preload-302149      | jenkins | v1.35.0 | 07 Apr 25 12:56 UTC | 07 Apr 25 12:56 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                        |         |         |                     |                     |
	| start   | -p no-preload-302149                                   | no-preload-302149      | jenkins | v1.35.0 | 07 Apr 25 12:56 UTC | 07 Apr 25 13:01 UTC |
	|         | --memory=2200                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr                                      |                        |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                        |         |         |                     |                     |
	|         | --driver=docker                                        |                        |         |         |                     |                     |
	|         | --container-runtime=docker                             |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                           |                        |         |         |                     |                     |
	| image   | no-preload-302149 image list                           | no-preload-302149      | jenkins | v1.35.0 | 07 Apr 25 13:01 UTC | 07 Apr 25 13:01 UTC |
	|         | --format=json                                          |                        |         |         |                     |                     |
	| pause   | -p no-preload-302149                                   | no-preload-302149      | jenkins | v1.35.0 | 07 Apr 25 13:01 UTC | 07 Apr 25 13:01 UTC |
	|         | --alsologtostderr -v=1                                 |                        |         |         |                     |                     |
	| unpause | -p no-preload-302149                                   | no-preload-302149      | jenkins | v1.35.0 | 07 Apr 25 13:01 UTC | 07 Apr 25 13:01 UTC |
	|         | --alsologtostderr -v=1                                 |                        |         |         |                     |                     |
	| delete  | -p no-preload-302149                                   | no-preload-302149      | jenkins | v1.35.0 | 07 Apr 25 13:01 UTC | 07 Apr 25 13:01 UTC |
	| delete  | -p no-preload-302149                                   | no-preload-302149      | jenkins | v1.35.0 | 07 Apr 25 13:01 UTC | 07 Apr 25 13:01 UTC |
	| start   | -p embed-certs-717935                                  | embed-certs-717935     | jenkins | v1.35.0 | 07 Apr 25 13:01 UTC | 07 Apr 25 13:02 UTC |
	|         | --memory=2200                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                        |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                        |         |         |                     |                     |
	|         |  --container-runtime=docker                            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                           |                        |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
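	The audit trail above records the exact sequence leading into the failing SecondStart: the profile is started once, metrics-server is enabled but pointed at the unreachable registry fake.domain, the cluster is stopped, dashboard is enabled, and the second start is issued. A sketch of the three preparatory commands, taken verbatim from the table (out/minikube-linux-arm64 is the binary path used throughout this report):
	
		out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-907855 \
		  --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
		out/minikube-linux-arm64 stop -p old-k8s-version-907855 --alsologtostderr -v=3
		out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-907855 \
		  --images=MetricsScraper=registry.k8s.io/echoserver:1.4
	
	Because fake.domain is not a resolvable registry, the metrics-server image can never be pulled, which is consistent with the repeated pod_ready.go "Ready":"False" lines for metrics-server-9975d5f86-hpzkf later in this log.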
	
	
	==> Last Start <==
	Log file created at: 2025/04/07 13:01:26
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0407 13:01:26.480504 1238390 out.go:345] Setting OutFile to fd 1 ...
	I0407 13:01:26.481005 1238390 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 13:01:26.481024 1238390 out.go:358] Setting ErrFile to fd 2...
	I0407 13:01:26.481088 1238390 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 13:01:26.481440 1238390 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20602-902080/.minikube/bin
	I0407 13:01:26.481924 1238390 out.go:352] Setting JSON to false
	I0407 13:01:26.482948 1238390 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":17031,"bootTime":1744013856,"procs":226,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1081-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0407 13:01:26.483019 1238390 start.go:139] virtualization:  
	I0407 13:01:26.486862 1238390 out.go:177] * [embed-certs-717935] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	I0407 13:01:26.491202 1238390 out.go:177]   - MINIKUBE_LOCATION=20602
	I0407 13:01:26.491265 1238390 notify.go:220] Checking for updates...
	I0407 13:01:26.498275 1238390 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0407 13:01:26.501209 1238390 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20602-902080/kubeconfig
	I0407 13:01:26.504577 1238390 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20602-902080/.minikube
	I0407 13:01:26.507501 1238390 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0407 13:01:26.510464 1238390 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0407 13:01:26.513926 1238390 config.go:182] Loaded profile config "old-k8s-version-907855": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0407 13:01:26.514041 1238390 driver.go:394] Setting default libvirt URI to qemu:///system
	I0407 13:01:26.541142 1238390 docker.go:123] docker version: linux-28.0.4:Docker Engine - Community
	I0407 13:01:26.541277 1238390 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0407 13:01:26.612449 1238390 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-04-07 13:01:26.599131329 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1081-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0]] Warnings:<nil>}}
	I0407 13:01:26.612563 1238390 docker.go:318] overlay module found
	I0407 13:01:26.615862 1238390 out.go:177] * Using the docker driver based on user configuration
	I0407 13:01:26.618856 1238390 start.go:297] selected driver: docker
	I0407 13:01:26.618882 1238390 start.go:901] validating driver "docker" against <nil>
	I0407 13:01:26.618897 1238390 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0407 13:01:26.619713 1238390 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0407 13:01:26.676062 1238390 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-04-07 13:01:26.665910686 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1081-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0]] Warnings:<nil>}}
	I0407 13:01:26.676221 1238390 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0407 13:01:26.676443 1238390 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0407 13:01:26.679621 1238390 out.go:177] * Using Docker driver with root privileges
	I0407 13:01:26.682579 1238390 cni.go:84] Creating CNI manager for ""
	I0407 13:01:26.682660 1238390 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0407 13:01:26.682675 1238390 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0407 13:01:26.682763 1238390 start.go:340] cluster config:
	{Name:embed-certs-717935 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:embed-certs-717935 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 13:01:26.685790 1238390 out.go:177] * Starting "embed-certs-717935" primary control-plane node in "embed-certs-717935" cluster
	I0407 13:01:26.688641 1238390 cache.go:121] Beginning downloading kic base image for docker with docker
	I0407 13:01:26.691723 1238390 out.go:177] * Pulling base image v0.0.46-1743675393-20591 ...
	I0407 13:01:26.694553 1238390 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime docker
	I0407 13:01:26.694640 1238390 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20602-902080/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-docker-overlay2-arm64.tar.lz4
	I0407 13:01:26.694654 1238390 cache.go:56] Caching tarball of preloaded images
	I0407 13:01:26.694762 1238390 preload.go:172] Found /home/jenkins/minikube-integration/20602-902080/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0407 13:01:26.694777 1238390 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on docker
	I0407 13:01:26.694881 1238390 profile.go:143] Saving config to /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/embed-certs-717935/config.json ...
	I0407 13:01:26.694905 1238390 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/embed-certs-717935/config.json: {Name:mkc2026bd0add1f2bfc44659eeb524ec1f282c73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:01:26.694554 1238390 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 in local docker daemon
	I0407 13:01:26.715720 1238390 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 in local docker daemon, skipping pull
	I0407 13:01:26.715742 1238390 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 exists in daemon, skipping load
	I0407 13:01:26.715762 1238390 cache.go:230] Successfully downloaded all kic artifacts
	I0407 13:01:26.715805 1238390 start.go:360] acquireMachinesLock for embed-certs-717935: {Name:mk3d0c47c549d6c534fd6153dd9099be8cc609c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0407 13:01:26.715930 1238390 start.go:364] duration metric: took 107.62µs to acquireMachinesLock for "embed-certs-717935"
	I0407 13:01:26.715958 1238390 start.go:93] Provisioning new machine with config: &{Name:embed-certs-717935 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:embed-certs-717935 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0407 13:01:26.716042 1238390 start.go:125] createHost starting for "" (driver="docker")
	I0407 13:01:27.828626 1223988 pod_ready.go:103] pod "metrics-server-9975d5f86-hpzkf" in "kube-system" namespace has status "Ready":"False"
	I0407 13:01:29.828909 1223988 pod_ready.go:103] pod "metrics-server-9975d5f86-hpzkf" in "kube-system" namespace has status "Ready":"False"
	I0407 13:01:26.721352 1238390 out.go:235] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0407 13:01:26.721676 1238390 start.go:159] libmachine.API.Create for "embed-certs-717935" (driver="docker")
	I0407 13:01:26.721720 1238390 client.go:168] LocalClient.Create starting
	I0407 13:01:26.721797 1238390 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20602-902080/.minikube/certs/ca.pem
	I0407 13:01:26.721835 1238390 main.go:141] libmachine: Decoding PEM data...
	I0407 13:01:26.721858 1238390 main.go:141] libmachine: Parsing certificate...
	I0407 13:01:26.721921 1238390 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20602-902080/.minikube/certs/cert.pem
	I0407 13:01:26.721944 1238390 main.go:141] libmachine: Decoding PEM data...
	I0407 13:01:26.721954 1238390 main.go:141] libmachine: Parsing certificate...
	I0407 13:01:26.722315 1238390 cli_runner.go:164] Run: docker network inspect embed-certs-717935 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0407 13:01:26.740092 1238390 cli_runner.go:211] docker network inspect embed-certs-717935 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0407 13:01:26.740177 1238390 network_create.go:284] running [docker network inspect embed-certs-717935] to gather additional debugging logs...
	I0407 13:01:26.740199 1238390 cli_runner.go:164] Run: docker network inspect embed-certs-717935
	W0407 13:01:26.757179 1238390 cli_runner.go:211] docker network inspect embed-certs-717935 returned with exit code 1
	I0407 13:01:26.757217 1238390 network_create.go:287] error running [docker network inspect embed-certs-717935]: docker network inspect embed-certs-717935: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-717935 not found
	I0407 13:01:26.757232 1238390 network_create.go:289] output of [docker network inspect embed-certs-717935]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-717935 not found
	
	** /stderr **
	I0407 13:01:26.757348 1238390 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0407 13:01:26.772616 1238390 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-ac0706c6046e IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:1a:7e:5d:ac:03:df} reservation:<nil>}
	I0407 13:01:26.772971 1238390 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-39dbe0d4d216 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:3d:8a:4c:44:33} reservation:<nil>}
	I0407 13:01:26.773329 1238390 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-383a69fdb1af IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:82:cb:30:36:cf:2f} reservation:<nil>}
	I0407 13:01:26.773772 1238390 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019b1300}
	I0407 13:01:26.773795 1238390 network_create.go:124] attempt to create docker network embed-certs-717935 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0407 13:01:26.773873 1238390 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-717935 embed-certs-717935
	I0407 13:01:26.861580 1238390 network_create.go:108] docker network embed-certs-717935 192.168.76.0/24 created
	I0407 13:01:26.861611 1238390 kic.go:121] calculated static IP "192.168.76.2" for the "embed-certs-717935" container
	I0407 13:01:26.861686 1238390 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0407 13:01:26.878413 1238390 cli_runner.go:164] Run: docker volume create embed-certs-717935 --label name.minikube.sigs.k8s.io=embed-certs-717935 --label created_by.minikube.sigs.k8s.io=true
	I0407 13:01:26.897630 1238390 oci.go:103] Successfully created a docker volume embed-certs-717935
	I0407 13:01:26.897713 1238390 cli_runner.go:164] Run: docker run --rm --name embed-certs-717935-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-717935 --entrypoint /usr/bin/test -v embed-certs-717935:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 -d /var/lib
	I0407 13:01:27.413775 1238390 oci.go:107] Successfully prepared a docker volume embed-certs-717935
	I0407 13:01:27.413831 1238390 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime docker
	I0407 13:01:27.413851 1238390 kic.go:194] Starting extracting preloaded images to volume ...
	I0407 13:01:27.413928 1238390 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20602-902080/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-717935:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 -I lz4 -xf /preloaded.tar -C /extractDir
	I0407 13:01:31.829714 1223988 pod_ready.go:103] pod "metrics-server-9975d5f86-hpzkf" in "kube-system" namespace has status "Ready":"False"
	I0407 13:01:34.327884 1223988 pod_ready.go:103] pod "metrics-server-9975d5f86-hpzkf" in "kube-system" namespace has status "Ready":"False"
	I0407 13:01:31.749605 1238390 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20602-902080/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-717935:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 -I lz4 -xf /preloaded.tar -C /extractDir: (4.335637513s)
	I0407 13:01:31.749658 1238390 kic.go:203] duration metric: took 4.335786872s to extract preloaded images to volume ...
	W0407 13:01:31.749816 1238390 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0407 13:01:31.749935 1238390 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0407 13:01:31.805374 1238390 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-717935 --name embed-certs-717935 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-717935 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-717935 --network embed-certs-717935 --ip 192.168.76.2 --volume embed-certs-717935:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727
	I0407 13:01:32.125542 1238390 cli_runner.go:164] Run: docker container inspect embed-certs-717935 --format={{.State.Running}}
	I0407 13:01:32.145234 1238390 cli_runner.go:164] Run: docker container inspect embed-certs-717935 --format={{.State.Status}}
	I0407 13:01:32.171413 1238390 cli_runner.go:164] Run: docker exec embed-certs-717935 stat /var/lib/dpkg/alternatives/iptables
	I0407 13:01:32.224449 1238390 oci.go:144] the created container "embed-certs-717935" has a running status.
	I0407 13:01:32.224480 1238390 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/20602-902080/.minikube/machines/embed-certs-717935/id_rsa...
	I0407 13:01:32.600752 1238390 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/20602-902080/.minikube/machines/embed-certs-717935/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0407 13:01:32.624342 1238390 cli_runner.go:164] Run: docker container inspect embed-certs-717935 --format={{.State.Status}}
	I0407 13:01:32.649578 1238390 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0407 13:01:32.649597 1238390 kic_runner.go:114] Args: [docker exec --privileged embed-certs-717935 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0407 13:01:32.726528 1238390 cli_runner.go:164] Run: docker container inspect embed-certs-717935 --format={{.State.Status}}
	I0407 13:01:32.762350 1238390 machine.go:93] provisionDockerMachine start ...
	I0407 13:01:32.762437 1238390 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-717935
	I0407 13:01:32.797467 1238390 main.go:141] libmachine: Using SSH client type: native
	I0407 13:01:32.797811 1238390 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e66c0] 0x3e8e80 <nil>  [] 0s} 127.0.0.1 34201 <nil> <nil>}
	I0407 13:01:32.797822 1238390 main.go:141] libmachine: About to run SSH command:
	hostname
	I0407 13:01:32.798490 1238390 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:32906->127.0.0.1:34201: read: connection reset by peer
	I0407 13:01:35.924462 1238390 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-717935
	
	I0407 13:01:35.924540 1238390 ubuntu.go:169] provisioning hostname "embed-certs-717935"
	I0407 13:01:35.924646 1238390 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-717935
	I0407 13:01:35.942854 1238390 main.go:141] libmachine: Using SSH client type: native
	I0407 13:01:35.943153 1238390 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e66c0] 0x3e8e80 <nil>  [] 0s} 127.0.0.1 34201 <nil> <nil>}
	I0407 13:01:35.943165 1238390 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-717935 && echo "embed-certs-717935" | sudo tee /etc/hostname
	I0407 13:01:36.087591 1238390 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-717935
	
	I0407 13:01:36.087673 1238390 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-717935
	I0407 13:01:36.106990 1238390 main.go:141] libmachine: Using SSH client type: native
	I0407 13:01:36.107294 1238390 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e66c0] 0x3e8e80 <nil>  [] 0s} 127.0.0.1 34201 <nil> <nil>}
	I0407 13:01:36.107316 1238390 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-717935' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-717935/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-717935' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0407 13:01:36.229064 1238390 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0407 13:01:36.229099 1238390 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20602-902080/.minikube CaCertPath:/home/jenkins/minikube-integration/20602-902080/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20602-902080/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20602-902080/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20602-902080/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20602-902080/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20602-902080/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20602-902080/.minikube}
	I0407 13:01:36.229158 1238390 ubuntu.go:177] setting up certificates
	I0407 13:01:36.229170 1238390 provision.go:84] configureAuth start
	I0407 13:01:36.229242 1238390 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-717935
	I0407 13:01:36.246159 1238390 provision.go:143] copyHostCerts
	I0407 13:01:36.246232 1238390 exec_runner.go:144] found /home/jenkins/minikube-integration/20602-902080/.minikube/key.pem, removing ...
	I0407 13:01:36.246248 1238390 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20602-902080/.minikube/key.pem
	I0407 13:01:36.246326 1238390 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20602-902080/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20602-902080/.minikube/key.pem (1675 bytes)
	I0407 13:01:36.246416 1238390 exec_runner.go:144] found /home/jenkins/minikube-integration/20602-902080/.minikube/ca.pem, removing ...
	I0407 13:01:36.246426 1238390 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20602-902080/.minikube/ca.pem
	I0407 13:01:36.246453 1238390 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20602-902080/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20602-902080/.minikube/ca.pem (1078 bytes)
	I0407 13:01:36.246541 1238390 exec_runner.go:144] found /home/jenkins/minikube-integration/20602-902080/.minikube/cert.pem, removing ...
	I0407 13:01:36.246550 1238390 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20602-902080/.minikube/cert.pem
	I0407 13:01:36.246575 1238390 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20602-902080/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20602-902080/.minikube/cert.pem (1123 bytes)
	I0407 13:01:36.246626 1238390 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20602-902080/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20602-902080/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20602-902080/.minikube/certs/ca-key.pem org=jenkins.embed-certs-717935 san=[127.0.0.1 192.168.76.2 embed-certs-717935 localhost minikube]
	I0407 13:01:36.329611 1223988 pod_ready.go:103] pod "metrics-server-9975d5f86-hpzkf" in "kube-system" namespace has status "Ready":"False"
	I0407 13:01:38.828978 1223988 pod_ready.go:103] pod "metrics-server-9975d5f86-hpzkf" in "kube-system" namespace has status "Ready":"False"
	I0407 13:01:37.626584 1238390 provision.go:177] copyRemoteCerts
	I0407 13:01:37.626661 1238390 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0407 13:01:37.626717 1238390 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-717935
	I0407 13:01:37.644567 1238390 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34201 SSHKeyPath:/home/jenkins/minikube-integration/20602-902080/.minikube/machines/embed-certs-717935/id_rsa Username:docker}
	I0407 13:01:37.734661 1238390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-902080/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0407 13:01:37.760609 1238390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-902080/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0407 13:01:37.786718 1238390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-902080/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0407 13:01:37.811712 1238390 provision.go:87] duration metric: took 1.582521657s to configureAuth
	I0407 13:01:37.811743 1238390 ubuntu.go:193] setting minikube options for container-runtime
	I0407 13:01:37.811980 1238390 config.go:182] Loaded profile config "embed-certs-717935": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0407 13:01:37.812046 1238390 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-717935
	I0407 13:01:37.835317 1238390 main.go:141] libmachine: Using SSH client type: native
	I0407 13:01:37.835720 1238390 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e66c0] 0x3e8e80 <nil>  [] 0s} 127.0.0.1 34201 <nil> <nil>}
	I0407 13:01:37.835745 1238390 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0407 13:01:37.957260 1238390 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0407 13:01:37.957284 1238390 ubuntu.go:71] root file system type: overlay
	I0407 13:01:37.957388 1238390 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0407 13:01:37.957472 1238390 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-717935
	I0407 13:01:37.975532 1238390 main.go:141] libmachine: Using SSH client type: native
	I0407 13:01:37.975845 1238390 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e66c0] 0x3e8e80 <nil>  [] 0s} 127.0.0.1 34201 <nil> <nil>}
	I0407 13:01:37.975927 1238390 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0407 13:01:38.118066 1238390 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0407 13:01:38.118177 1238390 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-717935
	I0407 13:01:38.137970 1238390 main.go:141] libmachine: Using SSH client type: native
	I0407 13:01:38.138277 1238390 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e66c0] 0x3e8e80 <nil>  [] 0s} 127.0.0.1 34201 <nil> <nil>}
	I0407 13:01:38.138299 1238390 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0407 13:01:39.017043 1238390 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-03-25 15:05:41.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-04-07 13:01:38.110408864 +0000
	@@ -1,46 +1,49 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this option.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0407 13:01:39.017077 1238390 machine.go:96] duration metric: took 6.254707425s to provisionDockerMachine
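
The `diff -u ... || { mv ...; systemctl ...; }` one-liner above is an idempotent update: diff exits zero when the staged unit already matches the installed one, so the replace/daemon-reload/restart branch only runs when the file actually changed. A minimal local sketch of the same pattern in Go (helper name and local execution are illustrative, not minikube's actual code):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// updateUnit mirrors the shell one-liner: replace and restart only on change.
	func updateUnit(current, staged string) error {
		// diff exits 0 when the files already match; nothing to do then.
		if err := exec.Command("diff", "-u", current, staged).Run(); err == nil {
			return nil
		}
		steps := [][]string{
			{"mv", staged, current},
			{"systemctl", "daemon-reload"},
			{"systemctl", "-f", "enable", "docker"},
			{"systemctl", "-f", "restart", "docker"},
		}
		for _, s := range steps {
			if out, err := exec.Command("sudo", s...).CombinedOutput(); err != nil {
				return fmt.Errorf("sudo %v: %v: %s", s, err, out)
			}
		}
		return nil
	}

	func main() {
		if err := updateUnit("/lib/systemd/system/docker.service", "/lib/systemd/system/docker.service.new"); err != nil {
			fmt.Println(err)
		}
	}
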
	I0407 13:01:39.017090 1238390 client.go:171] duration metric: took 12.295361246s to LocalClient.Create
	I0407 13:01:39.017104 1238390 start.go:167] duration metric: took 12.295430522s to libmachine.API.Create "embed-certs-717935"
	I0407 13:01:39.017112 1238390 start.go:293] postStartSetup for "embed-certs-717935" (driver="docker")
	I0407 13:01:39.017122 1238390 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0407 13:01:39.017195 1238390 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0407 13:01:39.017245 1238390 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-717935
	I0407 13:01:39.045770 1238390 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34201 SSHKeyPath:/home/jenkins/minikube-integration/20602-902080/.minikube/machines/embed-certs-717935/id_rsa Username:docker}
	I0407 13:01:39.139297 1238390 ssh_runner.go:195] Run: cat /etc/os-release
	I0407 13:01:39.142739 1238390 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0407 13:01:39.142776 1238390 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0407 13:01:39.142789 1238390 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0407 13:01:39.142797 1238390 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0407 13:01:39.142813 1238390 filesync.go:126] Scanning /home/jenkins/minikube-integration/20602-902080/.minikube/addons for local assets ...
	I0407 13:01:39.142875 1238390 filesync.go:126] Scanning /home/jenkins/minikube-integration/20602-902080/.minikube/files for local assets ...
	I0407 13:01:39.142959 1238390 filesync.go:149] local asset: /home/jenkins/minikube-integration/20602-902080/.minikube/files/etc/ssl/certs/9074612.pem -> 9074612.pem in /etc/ssl/certs
	I0407 13:01:39.143075 1238390 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0407 13:01:39.152035 1238390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-902080/.minikube/files/etc/ssl/certs/9074612.pem --> /etc/ssl/certs/9074612.pem (1708 bytes)
	I0407 13:01:39.178071 1238390 start.go:296] duration metric: took 160.943735ms for postStartSetup
	I0407 13:01:39.178512 1238390 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-717935
	I0407 13:01:39.196511 1238390 profile.go:143] Saving config to /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/embed-certs-717935/config.json ...
	I0407 13:01:39.196912 1238390 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0407 13:01:39.196978 1238390 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-717935
	I0407 13:01:39.213996 1238390 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34201 SSHKeyPath:/home/jenkins/minikube-integration/20602-902080/.minikube/machines/embed-certs-717935/id_rsa Username:docker}
	I0407 13:01:39.302125 1238390 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0407 13:01:39.306865 1238390 start.go:128] duration metric: took 12.590807184s to createHost
	I0407 13:01:39.306887 1238390 start.go:83] releasing machines lock for "embed-certs-717935", held for 12.590948372s
	I0407 13:01:39.306962 1238390 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-717935
	I0407 13:01:39.324302 1238390 ssh_runner.go:195] Run: cat /version.json
	I0407 13:01:39.324377 1238390 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-717935
	I0407 13:01:39.324302 1238390 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0407 13:01:39.324513 1238390 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-717935
	I0407 13:01:39.347837 1238390 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34201 SSHKeyPath:/home/jenkins/minikube-integration/20602-902080/.minikube/machines/embed-certs-717935/id_rsa Username:docker}
	I0407 13:01:39.364921 1238390 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34201 SSHKeyPath:/home/jenkins/minikube-integration/20602-902080/.minikube/machines/embed-certs-717935/id_rsa Username:docker}
	I0407 13:01:39.573746 1238390 ssh_runner.go:195] Run: systemctl --version
	I0407 13:01:39.580884 1238390 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0407 13:01:39.587223 1238390 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0407 13:01:39.615765 1238390 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0407 13:01:39.615889 1238390 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0407 13:01:39.651602 1238390 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
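
The two `find` commands above first patch the loopback CNI config (adding a "name" field if one is missing and pinning "cniVersion" to 1.0.0), then park any bridge/podman configs under a `.mk_disabled` suffix so they cannot conflict with the CNI minikube installs later. A rough Go equivalent of the loopback patch (an assumed helper for clarity; the real step is the sed invocation shown in the log, and the path below is illustrative):

	package main

	import (
		"encoding/json"
		"fmt"
		"os"
	)

	// patchLoopback ensures the config names itself and pins cniVersion.
	func patchLoopback(path string) error {
		raw, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var conf map[string]any
		if err := json.Unmarshal(raw, &conf); err != nil {
			return err
		}
		if _, ok := conf["name"]; !ok {
			conf["name"] = "loopback"
		}
		conf["cniVersion"] = "1.0.0"
		out, err := json.MarshalIndent(conf, "", "  ")
		if err != nil {
			return err
		}
		return os.WriteFile(path, out, 0644)
	}

	func main() {
		if err := patchLoopback("/etc/cni/net.d/200-loopback.conf"); err != nil {
			fmt.Println(err)
		}
	}
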
	I0407 13:01:39.651677 1238390 start.go:495] detecting cgroup driver to use...
	I0407 13:01:39.651726 1238390 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0407 13:01:39.651844 1238390 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0407 13:01:39.669566 1238390 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0407 13:01:39.682199 1238390 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0407 13:01:39.694533 1238390 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0407 13:01:39.694621 1238390 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0407 13:01:39.705001 1238390 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0407 13:01:39.715326 1238390 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0407 13:01:39.731479 1238390 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0407 13:01:39.742881 1238390 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0407 13:01:39.754236 1238390 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0407 13:01:39.765795 1238390 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0407 13:01:39.776069 1238390 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0407 13:01:39.797348 1238390 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0407 13:01:39.812025 1238390 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0407 13:01:39.827397 1238390 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 13:01:39.953840 1238390 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0407 13:01:40.062307 1238390 start.go:495] detecting cgroup driver to use...
	I0407 13:01:40.062373 1238390 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0407 13:01:40.062436 1238390 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0407 13:01:40.089587 1238390 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0407 13:01:40.089662 1238390 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0407 13:01:40.105680 1238390 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0407 13:01:40.127954 1238390 ssh_runner.go:195] Run: which cri-dockerd
	I0407 13:01:40.134693 1238390 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0407 13:01:40.145424 1238390 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0407 13:01:40.170587 1238390 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0407 13:01:40.283779 1238390 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0407 13:01:40.403663 1238390 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0407 13:01:40.403769 1238390 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0407 13:01:40.422994 1238390 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 13:01:40.540937 1238390 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0407 13:01:40.909468 1238390 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0407 13:01:40.922133 1238390 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0407 13:01:40.935055 1238390 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0407 13:01:41.020757 1238390 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0407 13:01:41.118696 1238390 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 13:01:41.212561 1238390 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0407 13:01:41.227944 1238390 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0407 13:01:41.239844 1238390 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 13:01:41.335174 1238390 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0407 13:01:41.440208 1238390 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0407 13:01:41.440331 1238390 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0407 13:01:41.444727 1238390 start.go:563] Will wait 60s for crictl version
	I0407 13:01:41.444812 1238390 ssh_runner.go:195] Run: which crictl
	I0407 13:01:41.448444 1238390 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0407 13:01:41.494083 1238390 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.0.4
	RuntimeApiVersion:  v1
	I0407 13:01:41.494154 1238390 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0407 13:01:41.520622 1238390 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0407 13:01:41.547580 1238390 out.go:235] * Preparing Kubernetes v1.32.2 on Docker 28.0.4 ...
	I0407 13:01:41.547715 1238390 cli_runner.go:164] Run: docker network inspect embed-certs-717935 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0407 13:01:41.564297 1238390 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0407 13:01:41.567889 1238390 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
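
The `{ grep -v ...; echo ...; } > /tmp/h.$$; sudo cp ...` idiom above makes the hosts entry idempotent: any stale host.minikube.internal line is filtered out before the fresh mapping is appended, and copying in place (rather than moving) preserves the bind mount Docker places on /etc/hosts inside the container. A hypothetical local equivalent in Go:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// setHostsEntry rewrites path so exactly one "ip<TAB>name" line exists.
	func setHostsEntry(path, ip, name string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if !strings.HasSuffix(line, "\t"+name) { // drop any stale entry
				kept = append(kept, line)
			}
		}
		kept = append(kept, ip+"\t"+name)
		// Write in place (like `cp` in the log) to keep the bind mount intact.
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}

	func main() {
		if err := setHostsEntry("/etc/hosts", "192.168.76.1", "host.minikube.internal"); err != nil {
			fmt.Println(err)
		}
	}
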
	I0407 13:01:41.579123 1238390 kubeadm.go:883] updating cluster {Name:embed-certs-717935 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:embed-certs-717935 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0407 13:01:41.579240 1238390 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime docker
	I0407 13:01:41.579312 1238390 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0407 13:01:41.597591 1238390 docker.go:689] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.32.2
	registry.k8s.io/kube-scheduler:v1.32.2
	registry.k8s.io/kube-controller-manager:v1.32.2
	registry.k8s.io/kube-proxy:v1.32.2
	registry.k8s.io/etcd:3.5.16-0
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0407 13:01:41.597611 1238390 docker.go:619] Images already preloaded, skipping extraction
	I0407 13:01:41.597674 1238390 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0407 13:01:41.621754 1238390 docker.go:689] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.32.2
	registry.k8s.io/kube-controller-manager:v1.32.2
	registry.k8s.io/kube-scheduler:v1.32.2
	registry.k8s.io/kube-proxy:v1.32.2
	registry.k8s.io/etcd:3.5.16-0
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0407 13:01:41.621780 1238390 cache_images.go:84] Images are preloaded, skipping loading
	I0407 13:01:41.621791 1238390 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.32.2 docker true true} ...
	I0407 13:01:41.621880 1238390 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-717935 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:embed-certs-717935 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0407 13:01:41.621949 1238390 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0407 13:01:41.675444 1238390 cni.go:84] Creating CNI manager for ""
	I0407 13:01:41.675469 1238390 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0407 13:01:41.675479 1238390 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0407 13:01:41.675498 1238390 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-717935 NodeName:embed-certs-717935 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0407 13:01:41.675638 1238390 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "embed-certs-717935"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
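
The rendered kubeadm.yaml above is a four-document YAML stream: InitConfiguration (node-local bootstrap settings), ClusterConfiguration (control-plane endpoint, cert dir, per-component extraArgs), KubeletConfiguration, and KubeProxyConfiguration. A small Go sketch that walks such a stream and prints each document's apiVersion/kind, using gopkg.in/yaml.v3 (the library choice is an assumption; minikube renders this file from its own templates):

	package main

	import (
		"fmt"
		"io"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
		if err != nil {
			panic(err)
		}
		defer f.Close()
		// yaml.Decoder yields one document per "---" separated section.
		dec := yaml.NewDecoder(f)
		for {
			var doc struct {
				APIVersion string `yaml:"apiVersion"`
				Kind       string `yaml:"kind"`
			}
			if err := dec.Decode(&doc); err == io.EOF {
				break
			} else if err != nil {
				panic(err)
			}
			fmt.Println(doc.APIVersion, doc.Kind)
		}
	}
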
	
	I0407 13:01:41.675708 1238390 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0407 13:01:41.685019 1238390 binaries.go:44] Found k8s binaries, skipping transfer
	I0407 13:01:41.685086 1238390 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0407 13:01:41.693631 1238390 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0407 13:01:41.711929 1238390 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0407 13:01:41.729819 1238390 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2296 bytes)
	I0407 13:01:41.750856 1238390 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0407 13:01:41.755129 1238390 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0407 13:01:41.766256 1238390 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 13:01:41.861607 1238390 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0407 13:01:41.878526 1238390 certs.go:68] Setting up /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/embed-certs-717935 for IP: 192.168.76.2
	I0407 13:01:41.878553 1238390 certs.go:194] generating shared ca certs ...
	I0407 13:01:41.878568 1238390 certs.go:226] acquiring lock for ca certs: {Name:mkba0a753a861c7f506d6ba219d653aabf2f5ff0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:01:41.878706 1238390 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20602-902080/.minikube/ca.key
	I0407 13:01:41.878760 1238390 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20602-902080/.minikube/proxy-client-ca.key
	I0407 13:01:41.878774 1238390 certs.go:256] generating profile certs ...
	I0407 13:01:41.878829 1238390 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/embed-certs-717935/client.key
	I0407 13:01:41.878851 1238390 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/embed-certs-717935/client.crt with IP's: []
	I0407 13:01:42.041371 1238390 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/embed-certs-717935/client.crt ...
	I0407 13:01:42.041405 1238390 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/embed-certs-717935/client.crt: {Name:mk9c7494b0212a4b49111896c0a89d81ac150fc2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:01:42.041615 1238390 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/embed-certs-717935/client.key ...
	I0407 13:01:42.041630 1238390 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/embed-certs-717935/client.key: {Name:mkf694f991e6bf0bb7f645b8feab8642e521b08c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:01:42.041727 1238390 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/embed-certs-717935/apiserver.key.77696a67
	I0407 13:01:42.041743 1238390 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/embed-certs-717935/apiserver.crt.77696a67 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I0407 13:01:42.527988 1238390 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/embed-certs-717935/apiserver.crt.77696a67 ...
	I0407 13:01:42.528021 1238390 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/embed-certs-717935/apiserver.crt.77696a67: {Name:mk4261d06a348c831f932a0d0da57fd173278818 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:01:42.528220 1238390 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/embed-certs-717935/apiserver.key.77696a67 ...
	I0407 13:01:42.528243 1238390 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/embed-certs-717935/apiserver.key.77696a67: {Name:mk37ad41e546eafc63b910f0085b62a219229603 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:01:42.528339 1238390 certs.go:381] copying /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/embed-certs-717935/apiserver.crt.77696a67 -> /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/embed-certs-717935/apiserver.crt
	I0407 13:01:42.528422 1238390 certs.go:385] copying /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/embed-certs-717935/apiserver.key.77696a67 -> /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/embed-certs-717935/apiserver.key
	I0407 13:01:42.528481 1238390 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/embed-certs-717935/proxy-client.key
	I0407 13:01:42.528499 1238390 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/embed-certs-717935/proxy-client.crt with IP's: []
	I0407 13:01:43.817667 1238390 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/embed-certs-717935/proxy-client.crt ...
	I0407 13:01:43.817700 1238390 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/embed-certs-717935/proxy-client.crt: {Name:mk1d58bf9c9383f2f042d20a79b25a6abd0db1a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:01:43.817896 1238390 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/embed-certs-717935/proxy-client.key ...
	I0407 13:01:43.817914 1238390 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/embed-certs-717935/proxy-client.key: {Name:mke0f79a37c45b20c205db7e87eff611800180c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
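
The certs.go flow above reuses the shared minikubeCA key pair and mints three profile certs: a client cert for "minikube-user", an apiserver serving cert whose IP SANs cover the in-cluster service VIP (10.96.0.1), localhost, and the node IP (192.168.76.2), and an aggregator (front-proxy) client cert. A minimal standard-library sketch of issuing one such SAN cert (subject names and validity are illustrative, not minikube's exact parameters):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	func main() {
		// Self-signed CA standing in for minikubeCA (illustrative only).
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		ca, _ := x509.ParseCertificate(caDER)

		// Serving cert with the IP SANs listed in the log above.
		key, _ := rsa.GenerateKey(rand.Reader, 2048)
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses: []net.IP{
				net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
				net.ParseIP("10.0.0.1"), net.ParseIP("192.168.76.2"),
			},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
		if err != nil {
			panic(err)
		}
		fmt.Println("issued cert,", len(der), "DER bytes")
	}
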
	I0407 13:01:43.818105 1238390 certs.go:484] found cert: /home/jenkins/minikube-integration/20602-902080/.minikube/certs/907461.pem (1338 bytes)
	W0407 13:01:43.818148 1238390 certs.go:480] ignoring /home/jenkins/minikube-integration/20602-902080/.minikube/certs/907461_empty.pem, impossibly tiny 0 bytes
	I0407 13:01:43.818158 1238390 certs.go:484] found cert: /home/jenkins/minikube-integration/20602-902080/.minikube/certs/ca-key.pem (1675 bytes)
	I0407 13:01:43.818180 1238390 certs.go:484] found cert: /home/jenkins/minikube-integration/20602-902080/.minikube/certs/ca.pem (1078 bytes)
	I0407 13:01:43.818207 1238390 certs.go:484] found cert: /home/jenkins/minikube-integration/20602-902080/.minikube/certs/cert.pem (1123 bytes)
	I0407 13:01:43.818232 1238390 certs.go:484] found cert: /home/jenkins/minikube-integration/20602-902080/.minikube/certs/key.pem (1675 bytes)
	I0407 13:01:43.818286 1238390 certs.go:484] found cert: /home/jenkins/minikube-integration/20602-902080/.minikube/files/etc/ssl/certs/9074612.pem (1708 bytes)
	I0407 13:01:43.818844 1238390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-902080/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0407 13:01:43.844036 1238390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-902080/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0407 13:01:43.872722 1238390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-902080/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0407 13:01:43.898013 1238390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-902080/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0407 13:01:43.923316 1238390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/embed-certs-717935/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0407 13:01:43.948613 1238390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/embed-certs-717935/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0407 13:01:43.973460 1238390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/embed-certs-717935/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0407 13:01:43.998683 1238390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/embed-certs-717935/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0407 13:01:44.026935 1238390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-902080/.minikube/certs/907461.pem --> /usr/share/ca-certificates/907461.pem (1338 bytes)
	I0407 13:01:44.053597 1238390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-902080/.minikube/files/etc/ssl/certs/9074612.pem --> /usr/share/ca-certificates/9074612.pem (1708 bytes)
	I0407 13:01:44.080252 1238390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-902080/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0407 13:01:44.114088 1238390 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0407 13:01:44.136924 1238390 ssh_runner.go:195] Run: openssl version
	I0407 13:01:44.143268 1238390 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9074612.pem && ln -fs /usr/share/ca-certificates/9074612.pem /etc/ssl/certs/9074612.pem"
	I0407 13:01:44.158165 1238390 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9074612.pem
	I0407 13:01:44.161728 1238390 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  7 12:11 /usr/share/ca-certificates/9074612.pem
	I0407 13:01:44.161817 1238390 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9074612.pem
	I0407 13:01:44.171029 1238390 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9074612.pem /etc/ssl/certs/3ec20f2e.0"
	I0407 13:01:44.180449 1238390 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0407 13:01:44.189810 1238390 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0407 13:01:44.193592 1238390 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  7 12:04 /usr/share/ca-certificates/minikubeCA.pem
	I0407 13:01:44.193675 1238390 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0407 13:01:44.200979 1238390 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0407 13:01:44.211118 1238390 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/907461.pem && ln -fs /usr/share/ca-certificates/907461.pem /etc/ssl/certs/907461.pem"
	I0407 13:01:44.222264 1238390 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/907461.pem
	I0407 13:01:44.226409 1238390 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  7 12:11 /usr/share/ca-certificates/907461.pem
	I0407 13:01:44.226502 1238390 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/907461.pem
	I0407 13:01:44.233756 1238390 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/907461.pem /etc/ssl/certs/51391683.0"
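
The three `ln -fs` commands above reproduce what OpenSSL's c_rehash does: CAs in /etc/ssl/certs are looked up by subject-name hash, so each PEM gets a <hash>.0 symlink (3ec20f2e.0, b5213941.0, and 51391683.0 in the log, with the hash taken from `openssl x509 -hash -noout`). A hypothetical Go helper for one cert:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkCert symlinks pem under certsDir as "<subject-hash>.0".
	func linkCert(pem, certsDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
		if err != nil {
			return err
		}
		link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
		_ = os.Remove(link) // replace any stale link, like `ln -fs`
		return os.Symlink(pem, link)
	}

	func main() {
		if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			fmt.Println(err)
		}
	}
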
	I0407 13:01:44.243177 1238390 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0407 13:01:44.246493 1238390 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0407 13:01:44.246549 1238390 kubeadm.go:392] StartCluster: {Name:embed-certs-717935 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:embed-certs-717935 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 13:01:44.246665 1238390 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0407 13:01:44.263811 1238390 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0407 13:01:44.273001 1238390 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0407 13:01:44.281619 1238390 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0407 13:01:44.281733 1238390 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0407 13:01:44.293512 1238390 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0407 13:01:44.293532 1238390 kubeadm.go:157] found existing configuration files:
	
	I0407 13:01:44.293589 1238390 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0407 13:01:44.302454 1238390 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0407 13:01:44.302553 1238390 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0407 13:01:44.311136 1238390 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0407 13:01:44.319801 1238390 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0407 13:01:44.319865 1238390 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0407 13:01:44.329941 1238390 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0407 13:01:44.338725 1238390 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0407 13:01:44.338805 1238390 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0407 13:01:44.348153 1238390 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0407 13:01:44.357261 1238390 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0407 13:01:44.357358 1238390 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
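
This block is the stale-config sweep: each kubeconfig is grepped for the expected control-plane endpoint and removed when the endpoint is absent. On this fresh node every grep exits with status 2 because the files do not exist yet, so each `rm -f` is a no-op. Sketched as a loop (helper name assumed):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// pruneStale keeps a kubeconfig only if it already names the expected endpoint.
	func pruneStale(endpoint string, files []string) {
		for _, f := range files {
			// grep exits non-zero when the endpoint is missing or the file is absent.
			if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
				_ = exec.Command("sudo", "rm", "-f", f).Run()
			}
		}
	}

	func main() {
		pruneStale("https://control-plane.minikube.internal:8443", []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		})
		fmt.Println("stale kubeconfigs pruned")
	}
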
	I0407 13:01:44.365914 1238390 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0407 13:01:44.406163 1238390 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0407 13:01:44.406224 1238390 kubeadm.go:310] [preflight] Running pre-flight checks
	I0407 13:01:44.440044 1238390 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0407 13:01:44.440120 1238390 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1081-aws
	I0407 13:01:44.440161 1238390 kubeadm.go:310] OS: Linux
	I0407 13:01:44.440211 1238390 kubeadm.go:310] CGROUPS_CPU: enabled
	I0407 13:01:44.440264 1238390 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0407 13:01:44.440326 1238390 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0407 13:01:44.440378 1238390 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0407 13:01:44.440430 1238390 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0407 13:01:44.440489 1238390 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0407 13:01:44.440538 1238390 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0407 13:01:44.440590 1238390 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0407 13:01:44.440658 1238390 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0407 13:01:44.523493 1238390 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0407 13:01:44.523607 1238390 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0407 13:01:44.523702 1238390 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0407 13:01:44.551974 1238390 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0407 13:01:41.329691 1223988 pod_ready.go:103] pod "metrics-server-9975d5f86-hpzkf" in "kube-system" namespace has status "Ready":"False"
	I0407 13:01:43.330345 1223988 pod_ready.go:103] pod "metrics-server-9975d5f86-hpzkf" in "kube-system" namespace has status "Ready":"False"
	I0407 13:01:45.333783 1223988 pod_ready.go:103] pod "metrics-server-9975d5f86-hpzkf" in "kube-system" namespace has status "Ready":"False"
	I0407 13:01:44.558368 1238390 out.go:235]   - Generating certificates and keys ...
	I0407 13:01:44.558481 1238390 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0407 13:01:44.558553 1238390 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0407 13:01:44.996680 1238390 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0407 13:01:45.366793 1238390 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0407 13:01:45.543025 1238390 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0407 13:01:45.930085 1238390 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0407 13:01:46.331476 1238390 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0407 13:01:46.331759 1238390 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [embed-certs-717935 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0407 13:01:46.676544 1238390 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0407 13:01:46.676972 1238390 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-717935 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0407 13:01:47.360913 1238390 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0407 13:01:47.531933 1238390 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0407 13:01:47.800839 1238390 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0407 13:01:47.801144 1238390 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0407 13:01:48.152120 1238390 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0407 13:01:48.362894 1238390 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0407 13:01:48.703710 1238390 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0407 13:01:48.878411 1238390 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0407 13:01:49.297746 1238390 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0407 13:01:49.298372 1238390 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0407 13:01:49.301279 1238390 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0407 13:01:47.828644 1223988 pod_ready.go:103] pod "metrics-server-9975d5f86-hpzkf" in "kube-system" namespace has status "Ready":"False"
	I0407 13:01:49.829188 1223988 pod_ready.go:103] pod "metrics-server-9975d5f86-hpzkf" in "kube-system" namespace has status "Ready":"False"
	I0407 13:01:49.305005 1238390 out.go:235]   - Booting up control plane ...
	I0407 13:01:49.305100 1238390 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0407 13:01:49.305176 1238390 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0407 13:01:49.305654 1238390 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0407 13:01:49.335600 1238390 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0407 13:01:49.342203 1238390 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0407 13:01:49.342257 1238390 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0407 13:01:49.460007 1238390 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0407 13:01:49.463694 1238390 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0407 13:01:50.965529 1238390 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.501885806s
	I0407 13:01:50.965617 1238390 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0407 13:01:52.329060 1223988 pod_ready.go:103] pod "metrics-server-9975d5f86-hpzkf" in "kube-system" namespace has status "Ready":"False"
	I0407 13:01:52.828873 1223988 pod_ready.go:82] duration metric: took 4m0.00586178s for pod "metrics-server-9975d5f86-hpzkf" in "kube-system" namespace to be "Ready" ...
	E0407 13:01:52.828894 1223988 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0407 13:01:52.828902 1223988 pod_ready.go:39] duration metric: took 5m25.304914992s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0407 13:01:52.828920 1223988 api_server.go:52] waiting for apiserver process to appear ...
	I0407 13:01:52.829030 1223988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0407 13:01:52.895418 1223988 logs.go:282] 2 containers: [8e9ca3cf686f 002b3321c8c9]
	I0407 13:01:52.895505 1223988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0407 13:01:52.918038 1223988 logs.go:282] 2 containers: [499bde040d37 76fcb451fd44]
	I0407 13:01:52.918123 1223988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0407 13:01:52.945768 1223988 logs.go:282] 2 containers: [e6a43a71b1f6 73c94e36d8a2]
	I0407 13:01:52.945857 1223988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0407 13:01:52.984050 1223988 logs.go:282] 2 containers: [0f335eb94cad 9715ae775fae]
	I0407 13:01:52.984217 1223988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0407 13:01:53.017976 1223988 logs.go:282] 2 containers: [02c99fe2d89e 308161cfd111]
	I0407 13:01:53.018147 1223988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0407 13:01:53.056322 1223988 logs.go:282] 2 containers: [5abec15abc05 3652c993a04e]
	I0407 13:01:53.056486 1223988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0407 13:01:53.088657 1223988 logs.go:282] 0 containers: []
	W0407 13:01:53.088701 1223988 logs.go:284] No container was found matching "kindnet"
	I0407 13:01:53.088763 1223988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0407 13:01:53.122569 1223988 logs.go:282] 1 containers: [f4034a5c5e25]
	I0407 13:01:53.122661 1223988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0407 13:01:53.172833 1223988 logs.go:282] 2 containers: [71f6bbb99341 49a236bde2cb]
	I0407 13:01:53.172879 1223988 logs.go:123] Gathering logs for describe nodes ...
	I0407 13:01:53.172894 1223988 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0407 13:01:53.492475 1223988 logs.go:123] Gathering logs for kube-controller-manager [3652c993a04e] ...
	I0407 13:01:53.492509 1223988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3652c993a04e"
	I0407 13:01:53.583751 1223988 logs.go:123] Gathering logs for container status ...
	I0407 13:01:53.583835 1223988 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 13:01:53.678786 1223988 logs.go:123] Gathering logs for kube-apiserver [8e9ca3cf686f] ...
	I0407 13:01:53.678867 1223988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e9ca3cf686f"
	I0407 13:01:53.765099 1223988 logs.go:123] Gathering logs for etcd [499bde040d37] ...
	I0407 13:01:53.765183 1223988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 499bde040d37"
	I0407 13:01:53.812738 1223988 logs.go:123] Gathering logs for etcd [76fcb451fd44] ...
	I0407 13:01:53.812854 1223988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76fcb451fd44"
	I0407 13:01:53.863780 1223988 logs.go:123] Gathering logs for coredns [73c94e36d8a2] ...
	I0407 13:01:53.863867 1223988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73c94e36d8a2"
	I0407 13:01:53.946570 1223988 logs.go:123] Gathering logs for kubernetes-dashboard [f4034a5c5e25] ...
	I0407 13:01:53.946647 1223988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4034a5c5e25"
	I0407 13:01:53.987779 1223988 logs.go:123] Gathering logs for storage-provisioner [49a236bde2cb] ...
	I0407 13:01:53.987857 1223988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49a236bde2cb"
	I0407 13:01:54.023298 1223988 logs.go:123] Gathering logs for kube-apiserver [002b3321c8c9] ...
	I0407 13:01:54.023383 1223988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 002b3321c8c9"
	I0407 13:01:54.165305 1223988 logs.go:123] Gathering logs for kube-scheduler [0f335eb94cad] ...
	I0407 13:01:54.165344 1223988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f335eb94cad"
	I0407 13:01:54.204612 1223988 logs.go:123] Gathering logs for kube-scheduler [9715ae775fae] ...
	I0407 13:01:54.204644 1223988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9715ae775fae"
	I0407 13:01:54.241251 1223988 logs.go:123] Gathering logs for kube-proxy [02c99fe2d89e] ...
	I0407 13:01:54.241293 1223988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02c99fe2d89e"
	I0407 13:01:54.290685 1223988 logs.go:123] Gathering logs for kube-proxy [308161cfd111] ...
	I0407 13:01:54.290724 1223988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 308161cfd111"
	I0407 13:01:54.338015 1223988 logs.go:123] Gathering logs for kube-controller-manager [5abec15abc05] ...
	I0407 13:01:54.338093 1223988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5abec15abc05"
	I0407 13:01:54.436720 1223988 logs.go:123] Gathering logs for Docker ...
	I0407 13:01:54.436756 1223988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0407 13:01:54.528048 1223988 logs.go:123] Gathering logs for dmesg ...
	I0407 13:01:54.528090 1223988 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 13:01:54.560158 1223988 logs.go:123] Gathering logs for coredns [e6a43a71b1f6] ...
	I0407 13:01:54.560183 1223988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6a43a71b1f6"
	I0407 13:01:54.613268 1223988 logs.go:123] Gathering logs for storage-provisioner [71f6bbb99341] ...
	I0407 13:01:54.613344 1223988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71f6bbb99341"
	I0407 13:01:54.650661 1223988 logs.go:123] Gathering logs for kubelet ...
	I0407 13:01:54.650729 1223988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0407 13:01:54.715363 1223988 logs.go:138] Found kubelet problem: Apr 07 12:56:27 old-k8s-version-907855 kubelet[1490]: E0407 12:56:27.364375    1490 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-907855" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-907855' and this object
	W0407 13:01:54.715674 1223988 logs.go:138] Found kubelet problem: Apr 07 12:56:27 old-k8s-version-907855 kubelet[1490]: E0407 12:56:27.364549    1490 reflector.go:138] object-"kube-system"/"kube-proxy-token-vfglr": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-vfglr" is forbidden: User "system:node:old-k8s-version-907855" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-907855' and this object
	W0407 13:01:54.715914 1223988 logs.go:138] Found kubelet problem: Apr 07 12:56:27 old-k8s-version-907855 kubelet[1490]: E0407 12:56:27.364618    1490 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-907855" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-907855' and this object
	W0407 13:01:54.716161 1223988 logs.go:138] Found kubelet problem: Apr 07 12:56:27 old-k8s-version-907855 kubelet[1490]: E0407 12:56:27.364679    1490 reflector.go:138] object-"kube-system"/"metrics-server-token-gjsq5": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-gjsq5" is forbidden: User "system:node:old-k8s-version-907855" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-907855' and this object
	W0407 13:01:54.716413 1223988 logs.go:138] Found kubelet problem: Apr 07 12:56:27 old-k8s-version-907855 kubelet[1490]: E0407 12:56:27.364737    1490 reflector.go:138] object-"kube-system"/"storage-provisioner-token-z49vs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-z49vs" is forbidden: User "system:node:old-k8s-version-907855" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-907855' and this object
	W0407 13:01:54.716651 1223988 logs.go:138] Found kubelet problem: Apr 07 12:56:27 old-k8s-version-907855 kubelet[1490]: E0407 12:56:27.371448    1490 reflector.go:138] object-"kube-system"/"coredns-token-nwrv9": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-nwrv9" is forbidden: User "system:node:old-k8s-version-907855" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-907855' and this object
	W0407 13:01:54.716905 1223988 logs.go:138] Found kubelet problem: Apr 07 12:56:27 old-k8s-version-907855 kubelet[1490]: E0407 12:56:27.371503    1490 reflector.go:138] object-"default"/"default-token-ld75m": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-ld75m" is forbidden: User "system:node:old-k8s-version-907855" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-907855' and this object
	W0407 13:01:54.723581 1223988 logs.go:138] Found kubelet problem: Apr 07 12:56:30 old-k8s-version-907855 kubelet[1490]: E0407 12:56:30.148275    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0407 13:01:54.724637 1223988 logs.go:138] Found kubelet problem: Apr 07 12:56:30 old-k8s-version-907855 kubelet[1490]: E0407 12:56:30.608856    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:01:54.725415 1223988 logs.go:138] Found kubelet problem: Apr 07 12:56:31 old-k8s-version-907855 kubelet[1490]: E0407 12:56:31.653412    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:01:54.728104 1223988 logs.go:138] Found kubelet problem: Apr 07 12:56:44 old-k8s-version-907855 kubelet[1490]: E0407 12:56:44.480129    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0407 13:01:54.728476 1223988 logs.go:138] Found kubelet problem: Apr 07 12:56:48 old-k8s-version-907855 kubelet[1490]: E0407 12:56:48.210975    1490 reflector.go:138] object-"kubernetes-dashboard"/"kubernetes-dashboard-token-6t88s": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kubernetes-dashboard-token-6t88s" is forbidden: User "system:node:old-k8s-version-907855" cannot list resource "secrets" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'old-k8s-version-907855' and this object
	W0407 13:01:54.733541 1223988 logs.go:138] Found kubelet problem: Apr 07 12:56:56 old-k8s-version-907855 kubelet[1490]: E0407 12:56:56.515766    1490 pod_workers.go:191] Error syncing pod ed73d674-5412-4f15-ab2f-792e1c2d94ea ("dashboard-metrics-scraper-8d5bb5db8-rl4hx_kubernetes-dashboard(ed73d674-5412-4f15-ab2f-792e1c2d94ea)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0407 13:01:54.734014 1223988 logs.go:138] Found kubelet problem: Apr 07 12:56:57 old-k8s-version-907855 kubelet[1490]: E0407 12:56:57.209281    1490 pod_workers.go:191] Error syncing pod ed73d674-5412-4f15-ab2f-792e1c2d94ea ("dashboard-metrics-scraper-8d5bb5db8-rl4hx_kubernetes-dashboard(ed73d674-5412-4f15-ab2f-792e1c2d94ea)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:01:54.734229 1223988 logs.go:138] Found kubelet problem: Apr 07 12:56:57 old-k8s-version-907855 kubelet[1490]: E0407 12:56:57.418876    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:01:54.735074 1223988 logs.go:138] Found kubelet problem: Apr 07 12:57:01 old-k8s-version-907855 kubelet[1490]: E0407 12:57:01.269861    1490 pod_workers.go:191] Error syncing pod dc3ff993-a34e-429d-8975-38688893221d ("storage-provisioner_kube-system(dc3ff993-a34e-429d-8975-38688893221d)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(dc3ff993-a34e-429d-8975-38688893221d)"
	W0407 13:01:54.737281 1223988 logs.go:138] Found kubelet problem: Apr 07 12:57:08 old-k8s-version-907855 kubelet[1490]: E0407 12:57:08.473737    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0407 13:01:54.739927 1223988 logs.go:138] Found kubelet problem: Apr 07 12:57:09 old-k8s-version-907855 kubelet[1490]: E0407 12:57:09.895279    1490 pod_workers.go:191] Error syncing pod ed73d674-5412-4f15-ab2f-792e1c2d94ea ("dashboard-metrics-scraper-8d5bb5db8-rl4hx_kubernetes-dashboard(ed73d674-5412-4f15-ab2f-792e1c2d94ea)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0407 13:01:54.740297 1223988 logs.go:138] Found kubelet problem: Apr 07 12:57:19 old-k8s-version-907855 kubelet[1490]: E0407 12:57:19.417047    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:01:54.740524 1223988 logs.go:138] Found kubelet problem: Apr 07 12:57:20 old-k8s-version-907855 kubelet[1490]: E0407 12:57:20.425017    1490 pod_workers.go:191] Error syncing pod ed73d674-5412-4f15-ab2f-792e1c2d94ea ("dashboard-metrics-scraper-8d5bb5db8-rl4hx_kubernetes-dashboard(ed73d674-5412-4f15-ab2f-792e1c2d94ea)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:01:54.742872 1223988 logs.go:138] Found kubelet problem: Apr 07 12:57:32 old-k8s-version-907855 kubelet[1490]: E0407 12:57:32.874830    1490 pod_workers.go:191] Error syncing pod ed73d674-5412-4f15-ab2f-792e1c2d94ea ("dashboard-metrics-scraper-8d5bb5db8-rl4hx_kubernetes-dashboard(ed73d674-5412-4f15-ab2f-792e1c2d94ea)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0407 13:01:54.743094 1223988 logs.go:138] Found kubelet problem: Apr 07 12:57:34 old-k8s-version-907855 kubelet[1490]: E0407 12:57:34.417561    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:01:54.743394 1223988 logs.go:138] Found kubelet problem: Apr 07 12:57:45 old-k8s-version-907855 kubelet[1490]: E0407 12:57:45.417155    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:01:54.743626 1223988 logs.go:138] Found kubelet problem: Apr 07 12:57:48 old-k8s-version-907855 kubelet[1490]: E0407 12:57:48.430562    1490 pod_workers.go:191] Error syncing pod ed73d674-5412-4f15-ab2f-792e1c2d94ea ("dashboard-metrics-scraper-8d5bb5db8-rl4hx_kubernetes-dashboard(ed73d674-5412-4f15-ab2f-792e1c2d94ea)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:01:54.745789 1223988 logs.go:138] Found kubelet problem: Apr 07 12:57:57 old-k8s-version-907855 kubelet[1490]: E0407 12:57:57.442723    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0407 13:01:54.746020 1223988 logs.go:138] Found kubelet problem: Apr 07 12:58:00 old-k8s-version-907855 kubelet[1490]: E0407 12:58:00.417030    1490 pod_workers.go:191] Error syncing pod ed73d674-5412-4f15-ab2f-792e1c2d94ea ("dashboard-metrics-scraper-8d5bb5db8-rl4hx_kubernetes-dashboard(ed73d674-5412-4f15-ab2f-792e1c2d94ea)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:01:54.746234 1223988 logs.go:138] Found kubelet problem: Apr 07 12:58:09 old-k8s-version-907855 kubelet[1490]: E0407 12:58:09.416715    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:01:54.746456 1223988 logs.go:138] Found kubelet problem: Apr 07 12:58:11 old-k8s-version-907855 kubelet[1490]: E0407 12:58:11.417008    1490 pod_workers.go:191] Error syncing pod ed73d674-5412-4f15-ab2f-792e1c2d94ea ("dashboard-metrics-scraper-8d5bb5db8-rl4hx_kubernetes-dashboard(ed73d674-5412-4f15-ab2f-792e1c2d94ea)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:01:54.746675 1223988 logs.go:138] Found kubelet problem: Apr 07 12:58:23 old-k8s-version-907855 kubelet[1490]: E0407 12:58:23.416985    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:01:54.748961 1223988 logs.go:138] Found kubelet problem: Apr 07 12:58:25 old-k8s-version-907855 kubelet[1490]: E0407 12:58:25.861509    1490 pod_workers.go:191] Error syncing pod ed73d674-5412-4f15-ab2f-792e1c2d94ea ("dashboard-metrics-scraper-8d5bb5db8-rl4hx_kubernetes-dashboard(ed73d674-5412-4f15-ab2f-792e1c2d94ea)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0407 13:01:54.749173 1223988 logs.go:138] Found kubelet problem: Apr 07 12:58:36 old-k8s-version-907855 kubelet[1490]: E0407 12:58:36.417146    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:01:54.749396 1223988 logs.go:138] Found kubelet problem: Apr 07 12:58:38 old-k8s-version-907855 kubelet[1490]: E0407 12:58:38.422135    1490 pod_workers.go:191] Error syncing pod ed73d674-5412-4f15-ab2f-792e1c2d94ea ("dashboard-metrics-scraper-8d5bb5db8-rl4hx_kubernetes-dashboard(ed73d674-5412-4f15-ab2f-792e1c2d94ea)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:01:54.749605 1223988 logs.go:138] Found kubelet problem: Apr 07 12:58:48 old-k8s-version-907855 kubelet[1490]: E0407 12:58:48.427432    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:01:54.749828 1223988 logs.go:138] Found kubelet problem: Apr 07 12:58:52 old-k8s-version-907855 kubelet[1490]: E0407 12:58:52.439868    1490 pod_workers.go:191] Error syncing pod ed73d674-5412-4f15-ab2f-792e1c2d94ea ("dashboard-metrics-scraper-8d5bb5db8-rl4hx_kubernetes-dashboard(ed73d674-5412-4f15-ab2f-792e1c2d94ea)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:01:54.750038 1223988 logs.go:138] Found kubelet problem: Apr 07 12:58:59 old-k8s-version-907855 kubelet[1490]: E0407 12:58:59.417124    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:01:54.750261 1223988 logs.go:138] Found kubelet problem: Apr 07 12:59:05 old-k8s-version-907855 kubelet[1490]: E0407 12:59:05.417150    1490 pod_workers.go:191] Error syncing pod ed73d674-5412-4f15-ab2f-792e1c2d94ea ("dashboard-metrics-scraper-8d5bb5db8-rl4hx_kubernetes-dashboard(ed73d674-5412-4f15-ab2f-792e1c2d94ea)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:01:54.750493 1223988 logs.go:138] Found kubelet problem: Apr 07 12:59:13 old-k8s-version-907855 kubelet[1490]: E0407 12:59:13.417339    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:01:54.750719 1223988 logs.go:138] Found kubelet problem: Apr 07 12:59:19 old-k8s-version-907855 kubelet[1490]: E0407 12:59:19.417596    1490 pod_workers.go:191] Error syncing pod ed73d674-5412-4f15-ab2f-792e1c2d94ea ("dashboard-metrics-scraper-8d5bb5db8-rl4hx_kubernetes-dashboard(ed73d674-5412-4f15-ab2f-792e1c2d94ea)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:01:54.752888 1223988 logs.go:138] Found kubelet problem: Apr 07 12:59:27 old-k8s-version-907855 kubelet[1490]: E0407 12:59:27.433153    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0407 13:01:54.753121 1223988 logs.go:138] Found kubelet problem: Apr 07 12:59:30 old-k8s-version-907855 kubelet[1490]: E0407 12:59:30.417236    1490 pod_workers.go:191] Error syncing pod ed73d674-5412-4f15-ab2f-792e1c2d94ea ("dashboard-metrics-scraper-8d5bb5db8-rl4hx_kubernetes-dashboard(ed73d674-5412-4f15-ab2f-792e1c2d94ea)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:01:54.753331 1223988 logs.go:138] Found kubelet problem: Apr 07 12:59:42 old-k8s-version-907855 kubelet[1490]: E0407 12:59:42.417377    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:01:54.753583 1223988 logs.go:138] Found kubelet problem: Apr 07 12:59:45 old-k8s-version-907855 kubelet[1490]: E0407 12:59:45.416991    1490 pod_workers.go:191] Error syncing pod ed73d674-5412-4f15-ab2f-792e1c2d94ea ("dashboard-metrics-scraper-8d5bb5db8-rl4hx_kubernetes-dashboard(ed73d674-5412-4f15-ab2f-792e1c2d94ea)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:01:54.753812 1223988 logs.go:138] Found kubelet problem: Apr 07 12:59:53 old-k8s-version-907855 kubelet[1490]: E0407 12:59:53.417446    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:01:54.756105 1223988 logs.go:138] Found kubelet problem: Apr 07 12:59:56 old-k8s-version-907855 kubelet[1490]: E0407 12:59:56.956582    1490 pod_workers.go:191] Error syncing pod ed73d674-5412-4f15-ab2f-792e1c2d94ea ("dashboard-metrics-scraper-8d5bb5db8-rl4hx_kubernetes-dashboard(ed73d674-5412-4f15-ab2f-792e1c2d94ea)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0407 13:01:54.756342 1223988 logs.go:138] Found kubelet problem: Apr 07 13:00:06 old-k8s-version-907855 kubelet[1490]: E0407 13:00:06.417090    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:01:54.756608 1223988 logs.go:138] Found kubelet problem: Apr 07 13:00:12 old-k8s-version-907855 kubelet[1490]: E0407 13:00:12.417499    1490 pod_workers.go:191] Error syncing pod ed73d674-5412-4f15-ab2f-792e1c2d94ea ("dashboard-metrics-scraper-8d5bb5db8-rl4hx_kubernetes-dashboard(ed73d674-5412-4f15-ab2f-792e1c2d94ea)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:01:54.756835 1223988 logs.go:138] Found kubelet problem: Apr 07 13:00:21 old-k8s-version-907855 kubelet[1490]: E0407 13:00:21.417196    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:01:54.757059 1223988 logs.go:138] Found kubelet problem: Apr 07 13:00:24 old-k8s-version-907855 kubelet[1490]: E0407 13:00:24.417156    1490 pod_workers.go:191] Error syncing pod ed73d674-5412-4f15-ab2f-792e1c2d94ea ("dashboard-metrics-scraper-8d5bb5db8-rl4hx_kubernetes-dashboard(ed73d674-5412-4f15-ab2f-792e1c2d94ea)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:01:54.757268 1223988 logs.go:138] Found kubelet problem: Apr 07 13:00:35 old-k8s-version-907855 kubelet[1490]: E0407 13:00:35.417216    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:01:54.757496 1223988 logs.go:138] Found kubelet problem: Apr 07 13:00:39 old-k8s-version-907855 kubelet[1490]: E0407 13:00:39.417204    1490 pod_workers.go:191] Error syncing pod ed73d674-5412-4f15-ab2f-792e1c2d94ea ("dashboard-metrics-scraper-8d5bb5db8-rl4hx_kubernetes-dashboard(ed73d674-5412-4f15-ab2f-792e1c2d94ea)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:01:54.757716 1223988 logs.go:138] Found kubelet problem: Apr 07 13:00:46 old-k8s-version-907855 kubelet[1490]: E0407 13:00:46.418573    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:01:54.757940 1223988 logs.go:138] Found kubelet problem: Apr 07 13:00:50 old-k8s-version-907855 kubelet[1490]: E0407 13:00:50.416979    1490 pod_workers.go:191] Error syncing pod ed73d674-5412-4f15-ab2f-792e1c2d94ea ("dashboard-metrics-scraper-8d5bb5db8-rl4hx_kubernetes-dashboard(ed73d674-5412-4f15-ab2f-792e1c2d94ea)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:01:54.758150 1223988 logs.go:138] Found kubelet problem: Apr 07 13:01:01 old-k8s-version-907855 kubelet[1490]: E0407 13:01:01.417156    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:01:54.758371 1223988 logs.go:138] Found kubelet problem: Apr 07 13:01:03 old-k8s-version-907855 kubelet[1490]: E0407 13:01:03.423671    1490 pod_workers.go:191] Error syncing pod ed73d674-5412-4f15-ab2f-792e1c2d94ea ("dashboard-metrics-scraper-8d5bb5db8-rl4hx_kubernetes-dashboard(ed73d674-5412-4f15-ab2f-792e1c2d94ea)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:01:54.758581 1223988 logs.go:138] Found kubelet problem: Apr 07 13:01:14 old-k8s-version-907855 kubelet[1490]: E0407 13:01:14.418839    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:01:54.758850 1223988 logs.go:138] Found kubelet problem: Apr 07 13:01:17 old-k8s-version-907855 kubelet[1490]: E0407 13:01:17.417074    1490 pod_workers.go:191] Error syncing pod ed73d674-5412-4f15-ab2f-792e1c2d94ea ("dashboard-metrics-scraper-8d5bb5db8-rl4hx_kubernetes-dashboard(ed73d674-5412-4f15-ab2f-792e1c2d94ea)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:01:54.759063 1223988 logs.go:138] Found kubelet problem: Apr 07 13:01:28 old-k8s-version-907855 kubelet[1490]: E0407 13:01:28.417882    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:01:54.759286 1223988 logs.go:138] Found kubelet problem: Apr 07 13:01:28 old-k8s-version-907855 kubelet[1490]: E0407 13:01:28.418181    1490 pod_workers.go:191] Error syncing pod ed73d674-5412-4f15-ab2f-792e1c2d94ea ("dashboard-metrics-scraper-8d5bb5db8-rl4hx_kubernetes-dashboard(ed73d674-5412-4f15-ab2f-792e1c2d94ea)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:01:54.759495 1223988 logs.go:138] Found kubelet problem: Apr 07 13:01:41 old-k8s-version-907855 kubelet[1490]: E0407 13:01:41.417193    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:01:54.759723 1223988 logs.go:138] Found kubelet problem: Apr 07 13:01:42 old-k8s-version-907855 kubelet[1490]: E0407 13:01:42.418831    1490 pod_workers.go:191] Error syncing pod ed73d674-5412-4f15-ab2f-792e1c2d94ea ("dashboard-metrics-scraper-8d5bb5db8-rl4hx_kubernetes-dashboard(ed73d674-5412-4f15-ab2f-792e1c2d94ea)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:01:54.759934 1223988 logs.go:138] Found kubelet problem: Apr 07 13:01:53 old-k8s-version-907855 kubelet[1490]: E0407 13:01:53.423295    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0407 13:01:54.759964 1223988 out.go:358] Setting ErrFile to fd 2...
	I0407 13:01:54.759987 1223988 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0407 13:01:54.760077 1223988 out.go:270] X Problems detected in kubelet:
	W0407 13:01:54.760117 1223988 out.go:270]   Apr 07 13:01:28 old-k8s-version-907855 kubelet[1490]: E0407 13:01:28.417882    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:01:54.760149 1223988 out.go:270]   Apr 07 13:01:28 old-k8s-version-907855 kubelet[1490]: E0407 13:01:28.418181    1490 pod_workers.go:191] Error syncing pod ed73d674-5412-4f15-ab2f-792e1c2d94ea ("dashboard-metrics-scraper-8d5bb5db8-rl4hx_kubernetes-dashboard(ed73d674-5412-4f15-ab2f-792e1c2d94ea)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:01:54.760199 1223988 out.go:270]   Apr 07 13:01:41 old-k8s-version-907855 kubelet[1490]: E0407 13:01:41.417193    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:01:54.760230 1223988 out.go:270]   Apr 07 13:01:42 old-k8s-version-907855 kubelet[1490]: E0407 13:01:42.418831    1490 pod_workers.go:191] Error syncing pod ed73d674-5412-4f15-ab2f-792e1c2d94ea ("dashboard-metrics-scraper-8d5bb5db8-rl4hx_kubernetes-dashboard(ed73d674-5412-4f15-ab2f-792e1c2d94ea)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:01:54.760262 1223988 out.go:270]   Apr 07 13:01:53 old-k8s-version-907855 kubelet[1490]: E0407 13:01:53.423295    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0407 13:01:54.760319 1223988 out.go:358] Setting ErrFile to fd 2...
	I0407 13:01:54.760337 1223988 out.go:392] TERM=,COLORTERM=, which probably does not support color
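	The recurring metrics-server failures in the kubelet problems above all trace back to pulling fake.domain/registry.k8s.io/echoserver:1.4, whose registry host never resolves ("dial tcp: lookup fake.domain ... no such host"). A minimal sketch to confirm that DNS failure from inside the node, assuming the profile name and that nslookup is available in the node image:

	    minikube -p old-k8s-version-907855 ssh -- nslookup fake.domain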
	I0407 13:01:57.967245 1238390 kubeadm.go:310] [api-check] The API server is healthy after 7.001655985s
	I0407 13:01:58.003462 1238390 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0407 13:01:58.023910 1238390 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0407 13:01:58.054827 1238390 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0407 13:01:58.055034 1238390 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-717935 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0407 13:01:58.071865 1238390 kubeadm.go:310] [bootstrap-token] Using token: 8ffbfk.4icwlyf6perxs9l8
	I0407 13:01:58.074858 1238390 out.go:235]   - Configuring RBAC rules ...
	I0407 13:01:58.074992 1238390 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0407 13:01:58.081655 1238390 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0407 13:01:58.090372 1238390 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0407 13:01:58.095796 1238390 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0407 13:01:58.102858 1238390 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0407 13:01:58.107379 1238390 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0407 13:01:58.381305 1238390 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0407 13:01:58.820210 1238390 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0407 13:01:59.382458 1238390 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0407 13:01:59.383811 1238390 kubeadm.go:310] 
	I0407 13:01:59.383900 1238390 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0407 13:01:59.383913 1238390 kubeadm.go:310] 
	I0407 13:01:59.383992 1238390 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0407 13:01:59.384006 1238390 kubeadm.go:310] 
	I0407 13:01:59.384033 1238390 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0407 13:01:59.384097 1238390 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0407 13:01:59.384175 1238390 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0407 13:01:59.384185 1238390 kubeadm.go:310] 
	I0407 13:01:59.384239 1238390 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0407 13:01:59.384248 1238390 kubeadm.go:310] 
	I0407 13:01:59.384296 1238390 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0407 13:01:59.384313 1238390 kubeadm.go:310] 
	I0407 13:01:59.384374 1238390 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0407 13:01:59.384454 1238390 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0407 13:01:59.384526 1238390 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0407 13:01:59.384535 1238390 kubeadm.go:310] 
	I0407 13:01:59.384620 1238390 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0407 13:01:59.384705 1238390 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0407 13:01:59.384713 1238390 kubeadm.go:310] 
	I0407 13:01:59.384833 1238390 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 8ffbfk.4icwlyf6perxs9l8 \
	I0407 13:01:59.384942 1238390 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e01aa0b2139562e6d7564e36bd9b30623276690658e7d6c3452f39fde5b54831 \
	I0407 13:01:59.384967 1238390 kubeadm.go:310] 	--control-plane 
	I0407 13:01:59.384976 1238390 kubeadm.go:310] 
	I0407 13:01:59.385060 1238390 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0407 13:01:59.385068 1238390 kubeadm.go:310] 
	I0407 13:01:59.385150 1238390 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 8ffbfk.4icwlyf6perxs9l8 \
	I0407 13:01:59.385255 1238390 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e01aa0b2139562e6d7564e36bd9b30623276690658e7d6c3452f39fde5b54831 
	I0407 13:01:59.389104 1238390 kubeadm.go:310] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I0407 13:01:59.389329 1238390 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1081-aws\n", err: exit status 1
	I0407 13:01:59.389453 1238390 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0407 13:01:59.389470 1238390 cni.go:84] Creating CNI manager for ""
	I0407 13:01:59.389484 1238390 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0407 13:01:59.392668 1238390 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0407 13:01:59.395527 1238390 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0407 13:01:59.410347 1238390 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
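	The 496-byte conflist written above carries the bridge CNI configuration chosen two lines earlier. An illustrative file of that general shape, where every field value is an assumption for illustration rather than the exact contents:

	    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	    {
	      "cniVersion": "0.3.1",
	      "name": "bridge",
	      "plugins": [
	        {
	          "type": "bridge",
	          "bridge": "bridge",
	          "isDefaultGateway": true,
	          "ipMasq": true,
	          "hairpinMode": true,
	          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	        }
	      ]
	    }
	    EOF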
	I0407 13:01:59.432586 1238390 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0407 13:01:59.432723 1238390 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 13:01:59.432843 1238390 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-717935 minikube.k8s.io/updated_at=2025_04_07T13_01_59_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=33e6edc58d2014d70e908473920ef4ac8eae1e43 minikube.k8s.io/name=embed-certs-717935 minikube.k8s.io/primary=true
	I0407 13:01:59.626161 1238390 ops.go:34] apiserver oom_adj: -16
	I0407 13:01:59.626349 1238390 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 13:02:00.127017 1238390 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 13:02:00.626856 1238390 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 13:02:01.126591 1238390 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 13:02:01.626509 1238390 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 13:02:02.126894 1238390 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 13:02:02.626452 1238390 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 13:02:03.126393 1238390 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 13:02:03.627067 1238390 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 13:02:04.127094 1238390 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 13:02:04.242197 1238390 kubeadm.go:1113] duration metric: took 4.809531813s to wait for elevateKubeSystemPrivileges
	I0407 13:02:04.242226 1238390 kubeadm.go:394] duration metric: took 19.995681221s to StartCluster
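	The repeated "kubectl get sa default" runs above, spaced roughly 500ms apart, are a poll for the default service account to exist before system privileges are elevated. The same pattern as a minimal shell sketch:

	    until sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default \
	        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	      sleep 0.5
	    done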
	I0407 13:02:04.242244 1238390 settings.go:142] acquiring lock: {Name:mkfee10638cabaeb5ccab0f7580cab520f4414b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:02:04.242307 1238390 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20602-902080/kubeconfig
	I0407 13:02:04.243638 1238390 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20602-902080/kubeconfig: {Name:mk5348bdf0fa2a5d213e4c9bed1510a349ce9529 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:02:04.243867 1238390 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0407 13:02:04.244028 1238390 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0407 13:02:04.244266 1238390 config.go:182] Loaded profile config "embed-certs-717935": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0407 13:02:04.244422 1238390 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0407 13:02:04.244482 1238390 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-717935"
	I0407 13:02:04.244504 1238390 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-717935"
	I0407 13:02:04.244526 1238390 host.go:66] Checking if "embed-certs-717935" exists ...
	I0407 13:02:04.245158 1238390 addons.go:69] Setting default-storageclass=true in profile "embed-certs-717935"
	I0407 13:02:04.245177 1238390 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-717935"
	I0407 13:02:04.245467 1238390 cli_runner.go:164] Run: docker container inspect embed-certs-717935 --format={{.State.Status}}
	I0407 13:02:04.245838 1238390 cli_runner.go:164] Run: docker container inspect embed-certs-717935 --format={{.State.Status}}
	I0407 13:02:04.248153 1238390 out.go:177] * Verifying Kubernetes components...
	I0407 13:02:04.252563 1238390 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 13:02:04.277081 1238390 addons.go:238] Setting addon default-storageclass=true in "embed-certs-717935"
	I0407 13:02:04.277122 1238390 host.go:66] Checking if "embed-certs-717935" exists ...
	I0407 13:02:04.277687 1238390 cli_runner.go:164] Run: docker container inspect embed-certs-717935 --format={{.State.Status}}
	I0407 13:02:04.323594 1238390 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0407 13:02:04.762000 1223988 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:02:04.778328 1223988 api_server.go:72] duration metric: took 5m49.458475569s to wait for apiserver process to appear ...
	I0407 13:02:04.778357 1223988 api_server.go:88] waiting for apiserver healthz status ...
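	Waiting for healthz status means probing the apiserver's /healthz endpoint until it answers ok. A minimal sketch of the same probe through the profile's kubeconfig, assuming the kubectl context name matches the profile:

	    kubectl --context old-k8s-version-907855 get --raw /healthz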
	I0407 13:02:04.778437 1223988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0407 13:02:04.821300 1223988 logs.go:282] 2 containers: [8e9ca3cf686f 002b3321c8c9]
	I0407 13:02:04.821386 1223988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0407 13:02:04.857002 1223988 logs.go:282] 2 containers: [499bde040d37 76fcb451fd44]
	I0407 13:02:04.857092 1223988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0407 13:02:04.894956 1223988 logs.go:282] 2 containers: [e6a43a71b1f6 73c94e36d8a2]
	I0407 13:02:04.895044 1223988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0407 13:02:04.919064 1223988 logs.go:282] 2 containers: [0f335eb94cad 9715ae775fae]
	I0407 13:02:04.919149 1223988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0407 13:02:04.952111 1223988 logs.go:282] 2 containers: [02c99fe2d89e 308161cfd111]
	I0407 13:02:04.952200 1223988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0407 13:02:05.001073 1223988 logs.go:282] 2 containers: [5abec15abc05 3652c993a04e]
	I0407 13:02:05.001172 1223988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0407 13:02:05.040585 1223988 logs.go:282] 0 containers: []
	W0407 13:02:05.040630 1223988 logs.go:284] No container was found matching "kindnet"
	I0407 13:02:05.040711 1223988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0407 13:02:05.069987 1223988 logs.go:282] 1 containers: [f4034a5c5e25]
	I0407 13:02:05.070085 1223988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0407 13:02:05.107487 1223988 logs.go:282] 2 containers: [71f6bbb99341 49a236bde2cb]
	I0407 13:02:05.107534 1223988 logs.go:123] Gathering logs for kube-proxy [02c99fe2d89e] ...
	I0407 13:02:05.107546 1223988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02c99fe2d89e"
	I0407 13:02:05.150377 1223988 logs.go:123] Gathering logs for describe nodes ...
	I0407 13:02:05.150406 1223988 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0407 13:02:05.432358 1223988 logs.go:123] Gathering logs for kube-apiserver [8e9ca3cf686f] ...
	I0407 13:02:05.432390 1223988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e9ca3cf686f"
	I0407 13:02:05.519522 1223988 logs.go:123] Gathering logs for etcd [499bde040d37] ...
	I0407 13:02:05.519563 1223988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 499bde040d37"
	I0407 13:02:05.584535 1223988 logs.go:123] Gathering logs for etcd [76fcb451fd44] ...
	I0407 13:02:05.584570 1223988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76fcb451fd44"
	I0407 13:02:05.629466 1223988 logs.go:123] Gathering logs for coredns [e6a43a71b1f6] ...
	I0407 13:02:05.629499 1223988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6a43a71b1f6"
	I0407 13:02:05.659764 1223988 logs.go:123] Gathering logs for coredns [73c94e36d8a2] ...
	I0407 13:02:05.659798 1223988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73c94e36d8a2"
	I0407 13:02:05.698989 1223988 logs.go:123] Gathering logs for kube-scheduler [0f335eb94cad] ...
	I0407 13:02:05.699018 1223988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f335eb94cad"
	I0407 13:02:05.731708 1223988 logs.go:123] Gathering logs for kube-apiserver [002b3321c8c9] ...
	I0407 13:02:05.731778 1223988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 002b3321c8c9"
	I0407 13:02:05.878737 1223988 logs.go:123] Gathering logs for kube-scheduler [9715ae775fae] ...
	I0407 13:02:05.878813 1223988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9715ae775fae"
	I0407 13:02:05.909034 1223988 logs.go:123] Gathering logs for kube-controller-manager [3652c993a04e] ...
	I0407 13:02:05.909105 1223988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3652c993a04e"
	I0407 13:02:05.964339 1223988 logs.go:123] Gathering logs for kubernetes-dashboard [f4034a5c5e25] ...
	I0407 13:02:05.964418 1223988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4034a5c5e25"
	I0407 13:02:05.994884 1223988 logs.go:123] Gathering logs for Docker ...
	I0407 13:02:05.994956 1223988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0407 13:02:04.330191 1238390 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0407 13:02:04.330214 1238390 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0407 13:02:04.330279 1238390 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-717935
	I0407 13:02:04.346925 1238390 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0407 13:02:04.346946 1238390 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0407 13:02:04.347008 1238390 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-717935
	I0407 13:02:04.384924 1238390 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34201 SSHKeyPath:/home/jenkins/minikube-integration/20602-902080/.minikube/machines/embed-certs-717935/id_rsa Username:docker}
	I0407 13:02:04.390616 1238390 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34201 SSHKeyPath:/home/jenkins/minikube-integration/20602-902080/.minikube/machines/embed-certs-717935/id_rsa Username:docker}
	I0407 13:02:04.726598 1238390 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0407 13:02:04.726714 1238390 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0407 13:02:04.740885 1238390 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0407 13:02:04.742090 1238390 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0407 13:02:05.997116 1238390 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.270375585s)
	I0407 13:02:05.997134 1238390 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.270501716s)
	I0407 13:02:05.997154 1238390 start.go:971] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
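	The sed pipeline that just completed splices two fragments into the CoreDNS Corefile: a hosts block ahead of the forward directive, and a log directive ahead of errors. The hosts stanza implied by the sed expression itself (indentation approximate):

	    hosts {
	       192.168.76.1 host.minikube.internal
	       fallthrough
	    }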
	I0407 13:02:05.998219 1238390 node_ready.go:35] waiting up to 6m0s for node "embed-certs-717935" to be "Ready" ...
	I0407 13:02:05.998337 1238390 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.256228023s)
	I0407 13:02:06.060639 1238390 node_ready.go:49] node "embed-certs-717935" has status "Ready":"True"
	I0407 13:02:06.060733 1238390 node_ready.go:38] duration metric: took 62.491844ms for node "embed-certs-717935" to be "Ready" ...
	I0407 13:02:06.060759 1238390 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0407 13:02:06.080377 1238390 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-cs6xd" in "kube-system" namespace to be "Ready" ...
	I0407 13:02:06.520282 1238390 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-717935" context rescaled to 1 replicas
	I0407 13:02:06.587945 1238390 pod_ready.go:93] pod "coredns-668d6bf9bc-cs6xd" in "kube-system" namespace has status "Ready":"True"
	I0407 13:02:06.587972 1238390 pod_ready.go:82] duration metric: took 507.56238ms for pod "coredns-668d6bf9bc-cs6xd" in "kube-system" namespace to be "Ready" ...
	I0407 13:02:06.587985 1238390 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-qdkmp" in "kube-system" namespace to be "Ready" ...
	I0407 13:02:06.598378 1238390 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.857407551s)
	I0407 13:02:06.601479 1238390 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0407 13:02:06.062540 1223988 logs.go:123] Gathering logs for container status ...
	I0407 13:02:06.062608 1223988 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 13:02:06.188493 1223988 logs.go:123] Gathering logs for dmesg ...
	I0407 13:02:06.188612 1223988 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 13:02:06.216307 1223988 logs.go:123] Gathering logs for kubelet ...
	I0407 13:02:06.216376 1223988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0407 13:02:06.294580 1223988 logs.go:138] Found kubelet problem: Apr 07 12:56:27 old-k8s-version-907855 kubelet[1490]: E0407 12:56:27.364375    1490 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-907855" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-907855' and this object
	W0407 13:02:06.294840 1223988 logs.go:138] Found kubelet problem: Apr 07 12:56:27 old-k8s-version-907855 kubelet[1490]: E0407 12:56:27.364549    1490 reflector.go:138] object-"kube-system"/"kube-proxy-token-vfglr": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-vfglr" is forbidden: User "system:node:old-k8s-version-907855" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-907855' and this object
	W0407 13:02:06.295051 1223988 logs.go:138] Found kubelet problem: Apr 07 12:56:27 old-k8s-version-907855 kubelet[1490]: E0407 12:56:27.364618    1490 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-907855" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-907855' and this object
	W0407 13:02:06.295275 1223988 logs.go:138] Found kubelet problem: Apr 07 12:56:27 old-k8s-version-907855 kubelet[1490]: E0407 12:56:27.364679    1490 reflector.go:138] object-"kube-system"/"metrics-server-token-gjsq5": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-gjsq5" is forbidden: User "system:node:old-k8s-version-907855" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-907855' and this object
	W0407 13:02:06.295503 1223988 logs.go:138] Found kubelet problem: Apr 07 12:56:27 old-k8s-version-907855 kubelet[1490]: E0407 12:56:27.364737    1490 reflector.go:138] object-"kube-system"/"storage-provisioner-token-z49vs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-z49vs" is forbidden: User "system:node:old-k8s-version-907855" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-907855' and this object
	W0407 13:02:06.296700 1223988 logs.go:138] Found kubelet problem: Apr 07 12:56:27 old-k8s-version-907855 kubelet[1490]: E0407 12:56:27.371448    1490 reflector.go:138] object-"kube-system"/"coredns-token-nwrv9": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-nwrv9" is forbidden: User "system:node:old-k8s-version-907855" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-907855' and this object
	W0407 13:02:06.296995 1223988 logs.go:138] Found kubelet problem: Apr 07 12:56:27 old-k8s-version-907855 kubelet[1490]: E0407 12:56:27.371503    1490 reflector.go:138] object-"default"/"default-token-ld75m": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-ld75m" is forbidden: User "system:node:old-k8s-version-907855" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-907855' and this object
	W0407 13:02:06.303834 1223988 logs.go:138] Found kubelet problem: Apr 07 12:56:30 old-k8s-version-907855 kubelet[1490]: E0407 12:56:30.148275    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0407 13:02:06.304821 1223988 logs.go:138] Found kubelet problem: Apr 07 12:56:30 old-k8s-version-907855 kubelet[1490]: E0407 12:56:30.608856    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:02:06.305574 1223988 logs.go:138] Found kubelet problem: Apr 07 12:56:31 old-k8s-version-907855 kubelet[1490]: E0407 12:56:31.653412    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:02:06.308360 1223988 logs.go:138] Found kubelet problem: Apr 07 12:56:44 old-k8s-version-907855 kubelet[1490]: E0407 12:56:44.480129    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0407 13:02:06.308907 1223988 logs.go:138] Found kubelet problem: Apr 07 12:56:48 old-k8s-version-907855 kubelet[1490]: E0407 12:56:48.210975    1490 reflector.go:138] object-"kubernetes-dashboard"/"kubernetes-dashboard-token-6t88s": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kubernetes-dashboard-token-6t88s" is forbidden: User "system:node:old-k8s-version-907855" cannot list resource "secrets" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'old-k8s-version-907855' and this object
	W0407 13:02:06.313970 1223988 logs.go:138] Found kubelet problem: Apr 07 12:56:56 old-k8s-version-907855 kubelet[1490]: E0407 12:56:56.515766    1490 pod_workers.go:191] Error syncing pod ed73d674-5412-4f15-ab2f-792e1c2d94ea ("dashboard-metrics-scraper-8d5bb5db8-rl4hx_kubernetes-dashboard(ed73d674-5412-4f15-ab2f-792e1c2d94ea)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0407 13:02:06.314363 1223988 logs.go:138] Found kubelet problem: Apr 07 12:56:57 old-k8s-version-907855 kubelet[1490]: E0407 12:56:57.209281    1490 pod_workers.go:191] Error syncing pod ed73d674-5412-4f15-ab2f-792e1c2d94ea ("dashboard-metrics-scraper-8d5bb5db8-rl4hx_kubernetes-dashboard(ed73d674-5412-4f15-ab2f-792e1c2d94ea)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:02:06.314553 1223988 logs.go:138] Found kubelet problem: Apr 07 12:56:57 old-k8s-version-907855 kubelet[1490]: E0407 12:56:57.418876    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:02:06.315328 1223988 logs.go:138] Found kubelet problem: Apr 07 12:57:01 old-k8s-version-907855 kubelet[1490]: E0407 12:57:01.269861    1490 pod_workers.go:191] Error syncing pod dc3ff993-a34e-429d-8975-38688893221d ("storage-provisioner_kube-system(dc3ff993-a34e-429d-8975-38688893221d)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(dc3ff993-a34e-429d-8975-38688893221d)"
	W0407 13:02:06.317500 1223988 logs.go:138] Found kubelet problem: Apr 07 12:57:08 old-k8s-version-907855 kubelet[1490]: E0407 12:57:08.473737    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0407 13:02:06.320290 1223988 logs.go:138] Found kubelet problem: Apr 07 12:57:09 old-k8s-version-907855 kubelet[1490]: E0407 12:57:09.895279    1490 pod_workers.go:191] Error syncing pod ed73d674-5412-4f15-ab2f-792e1c2d94ea ("dashboard-metrics-scraper-8d5bb5db8-rl4hx_kubernetes-dashboard(ed73d674-5412-4f15-ab2f-792e1c2d94ea)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0407 13:02:06.320661 1223988 logs.go:138] Found kubelet problem: Apr 07 12:57:19 old-k8s-version-907855 kubelet[1490]: E0407 12:57:19.417047    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:02:06.320876 1223988 logs.go:138] Found kubelet problem: Apr 07 12:57:20 old-k8s-version-907855 kubelet[1490]: E0407 12:57:20.425017    1490 pod_workers.go:191] Error syncing pod ed73d674-5412-4f15-ab2f-792e1c2d94ea ("dashboard-metrics-scraper-8d5bb5db8-rl4hx_kubernetes-dashboard(ed73d674-5412-4f15-ab2f-792e1c2d94ea)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:02:06.323182 1223988 logs.go:138] Found kubelet problem: Apr 07 12:57:32 old-k8s-version-907855 kubelet[1490]: E0407 12:57:32.874830    1490 pod_workers.go:191] Error syncing pod ed73d674-5412-4f15-ab2f-792e1c2d94ea ("dashboard-metrics-scraper-8d5bb5db8-rl4hx_kubernetes-dashboard(ed73d674-5412-4f15-ab2f-792e1c2d94ea)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0407 13:02:06.323372 1223988 logs.go:138] Found kubelet problem: Apr 07 12:57:34 old-k8s-version-907855 kubelet[1490]: E0407 12:57:34.417561    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:02:06.323559 1223988 logs.go:138] Found kubelet problem: Apr 07 12:57:45 old-k8s-version-907855 kubelet[1490]: E0407 12:57:45.417155    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:02:06.323792 1223988 logs.go:138] Found kubelet problem: Apr 07 12:57:48 old-k8s-version-907855 kubelet[1490]: E0407 12:57:48.430562    1490 pod_workers.go:191] Error syncing pod ed73d674-5412-4f15-ab2f-792e1c2d94ea ("dashboard-metrics-scraper-8d5bb5db8-rl4hx_kubernetes-dashboard(ed73d674-5412-4f15-ab2f-792e1c2d94ea)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:02:06.325868 1223988 logs.go:138] Found kubelet problem: Apr 07 12:57:57 old-k8s-version-907855 kubelet[1490]: E0407 12:57:57.442723    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0407 13:02:06.326067 1223988 logs.go:138] Found kubelet problem: Apr 07 12:58:00 old-k8s-version-907855 kubelet[1490]: E0407 12:58:00.417030    1490 pod_workers.go:191] Error syncing pod ed73d674-5412-4f15-ab2f-792e1c2d94ea ("dashboard-metrics-scraper-8d5bb5db8-rl4hx_kubernetes-dashboard(ed73d674-5412-4f15-ab2f-792e1c2d94ea)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:02:06.326253 1223988 logs.go:138] Found kubelet problem: Apr 07 12:58:09 old-k8s-version-907855 kubelet[1490]: E0407 12:58:09.416715    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:02:06.326472 1223988 logs.go:138] Found kubelet problem: Apr 07 12:58:11 old-k8s-version-907855 kubelet[1490]: E0407 12:58:11.417008    1490 pod_workers.go:191] Error syncing pod ed73d674-5412-4f15-ab2f-792e1c2d94ea ("dashboard-metrics-scraper-8d5bb5db8-rl4hx_kubernetes-dashboard(ed73d674-5412-4f15-ab2f-792e1c2d94ea)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:02:06.326674 1223988 logs.go:138] Found kubelet problem: Apr 07 12:58:23 old-k8s-version-907855 kubelet[1490]: E0407 12:58:23.416985    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:02:06.328950 1223988 logs.go:138] Found kubelet problem: Apr 07 12:58:25 old-k8s-version-907855 kubelet[1490]: E0407 12:58:25.861509    1490 pod_workers.go:191] Error syncing pod ed73d674-5412-4f15-ab2f-792e1c2d94ea ("dashboard-metrics-scraper-8d5bb5db8-rl4hx_kubernetes-dashboard(ed73d674-5412-4f15-ab2f-792e1c2d94ea)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0407 13:02:06.329139 1223988 logs.go:138] Found kubelet problem: Apr 07 12:58:36 old-k8s-version-907855 kubelet[1490]: E0407 12:58:36.417146    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:02:06.329337 1223988 logs.go:138] Found kubelet problem: Apr 07 12:58:38 old-k8s-version-907855 kubelet[1490]: E0407 12:58:38.422135    1490 pod_workers.go:191] Error syncing pod ed73d674-5412-4f15-ab2f-792e1c2d94ea ("dashboard-metrics-scraper-8d5bb5db8-rl4hx_kubernetes-dashboard(ed73d674-5412-4f15-ab2f-792e1c2d94ea)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:02:06.329522 1223988 logs.go:138] Found kubelet problem: Apr 07 12:58:48 old-k8s-version-907855 kubelet[1490]: E0407 12:58:48.427432    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:02:06.329721 1223988 logs.go:138] Found kubelet problem: Apr 07 12:58:52 old-k8s-version-907855 kubelet[1490]: E0407 12:58:52.439868    1490 pod_workers.go:191] Error syncing pod ed73d674-5412-4f15-ab2f-792e1c2d94ea ("dashboard-metrics-scraper-8d5bb5db8-rl4hx_kubernetes-dashboard(ed73d674-5412-4f15-ab2f-792e1c2d94ea)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:02:06.329934 1223988 logs.go:138] Found kubelet problem: Apr 07 12:58:59 old-k8s-version-907855 kubelet[1490]: E0407 12:58:59.417124    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:02:06.330135 1223988 logs.go:138] Found kubelet problem: Apr 07 12:59:05 old-k8s-version-907855 kubelet[1490]: E0407 12:59:05.417150    1490 pod_workers.go:191] Error syncing pod ed73d674-5412-4f15-ab2f-792e1c2d94ea ("dashboard-metrics-scraper-8d5bb5db8-rl4hx_kubernetes-dashboard(ed73d674-5412-4f15-ab2f-792e1c2d94ea)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:02:06.330319 1223988 logs.go:138] Found kubelet problem: Apr 07 12:59:13 old-k8s-version-907855 kubelet[1490]: E0407 12:59:13.417339    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:02:06.330546 1223988 logs.go:138] Found kubelet problem: Apr 07 12:59:19 old-k8s-version-907855 kubelet[1490]: E0407 12:59:19.417596    1490 pod_workers.go:191] Error syncing pod ed73d674-5412-4f15-ab2f-792e1c2d94ea ("dashboard-metrics-scraper-8d5bb5db8-rl4hx_kubernetes-dashboard(ed73d674-5412-4f15-ab2f-792e1c2d94ea)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:02:06.332724 1223988 logs.go:138] Found kubelet problem: Apr 07 12:59:27 old-k8s-version-907855 kubelet[1490]: E0407 12:59:27.433153    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0407 13:02:06.332935 1223988 logs.go:138] Found kubelet problem: Apr 07 12:59:30 old-k8s-version-907855 kubelet[1490]: E0407 12:59:30.417236    1490 pod_workers.go:191] Error syncing pod ed73d674-5412-4f15-ab2f-792e1c2d94ea ("dashboard-metrics-scraper-8d5bb5db8-rl4hx_kubernetes-dashboard(ed73d674-5412-4f15-ab2f-792e1c2d94ea)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:02:06.333123 1223988 logs.go:138] Found kubelet problem: Apr 07 12:59:42 old-k8s-version-907855 kubelet[1490]: E0407 12:59:42.417377    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:02:06.333322 1223988 logs.go:138] Found kubelet problem: Apr 07 12:59:45 old-k8s-version-907855 kubelet[1490]: E0407 12:59:45.416991    1490 pod_workers.go:191] Error syncing pod ed73d674-5412-4f15-ab2f-792e1c2d94ea ("dashboard-metrics-scraper-8d5bb5db8-rl4hx_kubernetes-dashboard(ed73d674-5412-4f15-ab2f-792e1c2d94ea)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:02:06.333508 1223988 logs.go:138] Found kubelet problem: Apr 07 12:59:53 old-k8s-version-907855 kubelet[1490]: E0407 12:59:53.417446    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:02:06.335771 1223988 logs.go:138] Found kubelet problem: Apr 07 12:59:56 old-k8s-version-907855 kubelet[1490]: E0407 12:59:56.956582    1490 pod_workers.go:191] Error syncing pod ed73d674-5412-4f15-ab2f-792e1c2d94ea ("dashboard-metrics-scraper-8d5bb5db8-rl4hx_kubernetes-dashboard(ed73d674-5412-4f15-ab2f-792e1c2d94ea)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0407 13:02:06.335959 1223988 logs.go:138] Found kubelet problem: Apr 07 13:00:06 old-k8s-version-907855 kubelet[1490]: E0407 13:00:06.417090    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:02:06.336156 1223988 logs.go:138] Found kubelet problem: Apr 07 13:00:12 old-k8s-version-907855 kubelet[1490]: E0407 13:00:12.417499    1490 pod_workers.go:191] Error syncing pod ed73d674-5412-4f15-ab2f-792e1c2d94ea ("dashboard-metrics-scraper-8d5bb5db8-rl4hx_kubernetes-dashboard(ed73d674-5412-4f15-ab2f-792e1c2d94ea)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:02:06.336340 1223988 logs.go:138] Found kubelet problem: Apr 07 13:00:21 old-k8s-version-907855 kubelet[1490]: E0407 13:00:21.417196    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:02:06.336571 1223988 logs.go:138] Found kubelet problem: Apr 07 13:00:24 old-k8s-version-907855 kubelet[1490]: E0407 13:00:24.417156    1490 pod_workers.go:191] Error syncing pod ed73d674-5412-4f15-ab2f-792e1c2d94ea ("dashboard-metrics-scraper-8d5bb5db8-rl4hx_kubernetes-dashboard(ed73d674-5412-4f15-ab2f-792e1c2d94ea)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:02:06.336809 1223988 logs.go:138] Found kubelet problem: Apr 07 13:00:35 old-k8s-version-907855 kubelet[1490]: E0407 13:00:35.417216    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:02:06.337012 1223988 logs.go:138] Found kubelet problem: Apr 07 13:00:39 old-k8s-version-907855 kubelet[1490]: E0407 13:00:39.417204    1490 pod_workers.go:191] Error syncing pod ed73d674-5412-4f15-ab2f-792e1c2d94ea ("dashboard-metrics-scraper-8d5bb5db8-rl4hx_kubernetes-dashboard(ed73d674-5412-4f15-ab2f-792e1c2d94ea)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:02:06.337199 1223988 logs.go:138] Found kubelet problem: Apr 07 13:00:46 old-k8s-version-907855 kubelet[1490]: E0407 13:00:46.418573    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:02:06.337396 1223988 logs.go:138] Found kubelet problem: Apr 07 13:00:50 old-k8s-version-907855 kubelet[1490]: E0407 13:00:50.416979    1490 pod_workers.go:191] Error syncing pod ed73d674-5412-4f15-ab2f-792e1c2d94ea ("dashboard-metrics-scraper-8d5bb5db8-rl4hx_kubernetes-dashboard(ed73d674-5412-4f15-ab2f-792e1c2d94ea)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:02:06.337580 1223988 logs.go:138] Found kubelet problem: Apr 07 13:01:01 old-k8s-version-907855 kubelet[1490]: E0407 13:01:01.417156    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:02:06.337776 1223988 logs.go:138] Found kubelet problem: Apr 07 13:01:03 old-k8s-version-907855 kubelet[1490]: E0407 13:01:03.423671    1490 pod_workers.go:191] Error syncing pod ed73d674-5412-4f15-ab2f-792e1c2d94ea ("dashboard-metrics-scraper-8d5bb5db8-rl4hx_kubernetes-dashboard(ed73d674-5412-4f15-ab2f-792e1c2d94ea)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:02:06.337961 1223988 logs.go:138] Found kubelet problem: Apr 07 13:01:14 old-k8s-version-907855 kubelet[1490]: E0407 13:01:14.418839    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:02:06.338173 1223988 logs.go:138] Found kubelet problem: Apr 07 13:01:17 old-k8s-version-907855 kubelet[1490]: E0407 13:01:17.417074    1490 pod_workers.go:191] Error syncing pod ed73d674-5412-4f15-ab2f-792e1c2d94ea ("dashboard-metrics-scraper-8d5bb5db8-rl4hx_kubernetes-dashboard(ed73d674-5412-4f15-ab2f-792e1c2d94ea)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:02:06.338375 1223988 logs.go:138] Found kubelet problem: Apr 07 13:01:28 old-k8s-version-907855 kubelet[1490]: E0407 13:01:28.417882    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:02:06.338573 1223988 logs.go:138] Found kubelet problem: Apr 07 13:01:28 old-k8s-version-907855 kubelet[1490]: E0407 13:01:28.418181    1490 pod_workers.go:191] Error syncing pod ed73d674-5412-4f15-ab2f-792e1c2d94ea ("dashboard-metrics-scraper-8d5bb5db8-rl4hx_kubernetes-dashboard(ed73d674-5412-4f15-ab2f-792e1c2d94ea)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:02:06.338764 1223988 logs.go:138] Found kubelet problem: Apr 07 13:01:41 old-k8s-version-907855 kubelet[1490]: E0407 13:01:41.417193    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:02:06.338962 1223988 logs.go:138] Found kubelet problem: Apr 07 13:01:42 old-k8s-version-907855 kubelet[1490]: E0407 13:01:42.418831    1490 pod_workers.go:191] Error syncing pod ed73d674-5412-4f15-ab2f-792e1c2d94ea ("dashboard-metrics-scraper-8d5bb5db8-rl4hx_kubernetes-dashboard(ed73d674-5412-4f15-ab2f-792e1c2d94ea)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:02:06.339172 1223988 logs.go:138] Found kubelet problem: Apr 07 13:01:53 old-k8s-version-907855 kubelet[1490]: E0407 13:01:53.423295    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:02:06.339373 1223988 logs.go:138] Found kubelet problem: Apr 07 13:01:57 old-k8s-version-907855 kubelet[1490]: E0407 13:01:57.417250    1490 pod_workers.go:191] Error syncing pod ed73d674-5412-4f15-ab2f-792e1c2d94ea ("dashboard-metrics-scraper-8d5bb5db8-rl4hx_kubernetes-dashboard(ed73d674-5412-4f15-ab2f-792e1c2d94ea)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:02:06.339557 1223988 logs.go:138] Found kubelet problem: Apr 07 13:02:05 old-k8s-version-907855 kubelet[1490]: E0407 13:02:05.421584    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0407 13:02:06.339569 1223988 logs.go:123] Gathering logs for kube-proxy [308161cfd111] ...
	I0407 13:02:06.339584 1223988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 308161cfd111"
	I0407 13:02:06.377827 1223988 logs.go:123] Gathering logs for kube-controller-manager [5abec15abc05] ...
	I0407 13:02:06.377853 1223988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5abec15abc05"
	I0407 13:02:06.460822 1223988 logs.go:123] Gathering logs for storage-provisioner [71f6bbb99341] ...
	I0407 13:02:06.463251 1223988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71f6bbb99341"
	I0407 13:02:06.503868 1223988 logs.go:123] Gathering logs for storage-provisioner [49a236bde2cb] ...
	I0407 13:02:06.503940 1223988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49a236bde2cb"
	I0407 13:02:06.528210 1223988 out.go:358] Setting ErrFile to fd 2...
	I0407 13:02:06.528231 1223988 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0407 13:02:06.528274 1223988 out.go:270] X Problems detected in kubelet:
	W0407 13:02:06.528285 1223988 out.go:270]   Apr 07 13:01:41 old-k8s-version-907855 kubelet[1490]: E0407 13:01:41.417193    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:02:06.528296 1223988 out.go:270]   Apr 07 13:01:42 old-k8s-version-907855 kubelet[1490]: E0407 13:01:42.418831    1490 pod_workers.go:191] Error syncing pod ed73d674-5412-4f15-ab2f-792e1c2d94ea ("dashboard-metrics-scraper-8d5bb5db8-rl4hx_kubernetes-dashboard(ed73d674-5412-4f15-ab2f-792e1c2d94ea)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:02:06.528304 1223988 out.go:270]   Apr 07 13:01:53 old-k8s-version-907855 kubelet[1490]: E0407 13:01:53.423295    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:02:06.528309 1223988 out.go:270]   Apr 07 13:01:57 old-k8s-version-907855 kubelet[1490]: E0407 13:01:57.417250    1490 pod_workers.go:191] Error syncing pod ed73d674-5412-4f15-ab2f-792e1c2d94ea ("dashboard-metrics-scraper-8d5bb5db8-rl4hx_kubernetes-dashboard(ed73d674-5412-4f15-ab2f-792e1c2d94ea)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:02:06.528314 1223988 out.go:270]   Apr 07 13:02:05 old-k8s-version-907855 kubelet[1490]: E0407 13:02:05.421584    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0407 13:02:06.528324 1223988 out.go:358] Setting ErrFile to fd 2...
	I0407 13:02:06.528330 1223988 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 13:02:06.604461 1238390 addons.go:514] duration metric: took 2.360027195s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0407 13:02:08.592914 1238390 pod_ready.go:103] pod "coredns-668d6bf9bc-qdkmp" in "kube-system" namespace has status "Ready":"False"
	I0407 13:02:10.593533 1238390 pod_ready.go:103] pod "coredns-668d6bf9bc-qdkmp" in "kube-system" namespace has status "Ready":"False"
	I0407 13:02:12.593639 1238390 pod_ready.go:93] pod "coredns-668d6bf9bc-qdkmp" in "kube-system" namespace has status "Ready":"True"
	I0407 13:02:12.593741 1238390 pod_ready.go:82] duration metric: took 6.005746392s for pod "coredns-668d6bf9bc-qdkmp" in "kube-system" namespace to be "Ready" ...
	I0407 13:02:12.593792 1238390 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-717935" in "kube-system" namespace to be "Ready" ...
	I0407 13:02:12.603261 1238390 pod_ready.go:93] pod "etcd-embed-certs-717935" in "kube-system" namespace has status "Ready":"True"
	I0407 13:02:12.603286 1238390 pod_ready.go:82] duration metric: took 9.471ms for pod "etcd-embed-certs-717935" in "kube-system" namespace to be "Ready" ...
	I0407 13:02:12.603299 1238390 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-717935" in "kube-system" namespace to be "Ready" ...
	I0407 13:02:12.608259 1238390 pod_ready.go:93] pod "kube-apiserver-embed-certs-717935" in "kube-system" namespace has status "Ready":"True"
	I0407 13:02:12.608287 1238390 pod_ready.go:82] duration metric: took 4.979351ms for pod "kube-apiserver-embed-certs-717935" in "kube-system" namespace to be "Ready" ...
	I0407 13:02:12.608299 1238390 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-717935" in "kube-system" namespace to be "Ready" ...
	I0407 13:02:12.612645 1238390 pod_ready.go:93] pod "kube-controller-manager-embed-certs-717935" in "kube-system" namespace has status "Ready":"True"
	I0407 13:02:12.612680 1238390 pod_ready.go:82] duration metric: took 4.372156ms for pod "kube-controller-manager-embed-certs-717935" in "kube-system" namespace to be "Ready" ...
	I0407 13:02:12.612694 1238390 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-g8kpd" in "kube-system" namespace to be "Ready" ...
	I0407 13:02:12.617531 1238390 pod_ready.go:93] pod "kube-proxy-g8kpd" in "kube-system" namespace has status "Ready":"True"
	I0407 13:02:12.617598 1238390 pod_ready.go:82] duration metric: took 4.881767ms for pod "kube-proxy-g8kpd" in "kube-system" namespace to be "Ready" ...
	I0407 13:02:12.617626 1238390 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-717935" in "kube-system" namespace to be "Ready" ...
	I0407 13:02:12.991574 1238390 pod_ready.go:93] pod "kube-scheduler-embed-certs-717935" in "kube-system" namespace has status "Ready":"True"
	I0407 13:02:12.991597 1238390 pod_ready.go:82] duration metric: took 373.949573ms for pod "kube-scheduler-embed-certs-717935" in "kube-system" namespace to be "Ready" ...
	I0407 13:02:12.991606 1238390 pod_ready.go:39] duration metric: took 6.930784507s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0407 13:02:12.991625 1238390 api_server.go:52] waiting for apiserver process to appear ...
	I0407 13:02:12.991687 1238390 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:02:13.004244 1238390 api_server.go:72] duration metric: took 8.760331519s to wait for apiserver process to appear ...
	I0407 13:02:13.004343 1238390 api_server.go:88] waiting for apiserver healthz status ...
	I0407 13:02:13.004382 1238390 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0407 13:02:13.013885 1238390 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0407 13:02:13.015106 1238390 api_server.go:141] control plane version: v1.32.2
	I0407 13:02:13.015135 1238390 api_server.go:131] duration metric: took 10.772488ms to wait for apiserver health ...
	I0407 13:02:13.015144 1238390 system_pods.go:43] waiting for kube-system pods to appear ...
	I0407 13:02:13.192079 1238390 system_pods.go:59] 8 kube-system pods found
	I0407 13:02:13.192117 1238390 system_pods.go:61] "coredns-668d6bf9bc-cs6xd" [321867e8-4417-46a5-84f7-453adc4a0c72] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
	I0407 13:02:13.192124 1238390 system_pods.go:61] "coredns-668d6bf9bc-qdkmp" [3e115b21-622d-4d8b-a362-82a1925d4684] Running
	I0407 13:02:13.192128 1238390 system_pods.go:61] "etcd-embed-certs-717935" [373c2c5d-0269-4ba0-b2dc-5c326b82c02c] Running
	I0407 13:02:13.192132 1238390 system_pods.go:61] "kube-apiserver-embed-certs-717935" [17de9321-4a0f-40fc-acc5-f8db227ed8c7] Running
	I0407 13:02:13.192137 1238390 system_pods.go:61] "kube-controller-manager-embed-certs-717935" [dc66e762-52f9-4a4c-8170-30cf48b7bc95] Running
	I0407 13:02:13.192140 1238390 system_pods.go:61] "kube-proxy-g8kpd" [f7bac5a9-1b6e-406d-8cfa-f4a459aa6bb7] Running
	I0407 13:02:13.192144 1238390 system_pods.go:61] "kube-scheduler-embed-certs-717935" [06c059d7-d1a1-4980-bd5c-6098a47a1dff] Running
	I0407 13:02:13.192148 1238390 system_pods.go:61] "storage-provisioner" [3ba6ab67-8297-4e38-8269-c509f5ac1124] Running
	I0407 13:02:13.192155 1238390 system_pods.go:74] duration metric: took 177.003586ms to wait for pod list to return data ...
	I0407 13:02:13.192166 1238390 default_sa.go:34] waiting for default service account to be created ...
	I0407 13:02:13.391530 1238390 default_sa.go:45] found service account: "default"
	I0407 13:02:13.391553 1238390 default_sa.go:55] duration metric: took 199.377046ms for default service account to be created ...
	I0407 13:02:13.391565 1238390 system_pods.go:116] waiting for k8s-apps to be running ...
	I0407 13:02:13.592174 1238390 system_pods.go:86] 7 kube-system pods found
	I0407 13:02:13.592205 1238390 system_pods.go:89] "coredns-668d6bf9bc-qdkmp" [3e115b21-622d-4d8b-a362-82a1925d4684] Running
	I0407 13:02:13.592212 1238390 system_pods.go:89] "etcd-embed-certs-717935" [373c2c5d-0269-4ba0-b2dc-5c326b82c02c] Running
	I0407 13:02:13.592219 1238390 system_pods.go:89] "kube-apiserver-embed-certs-717935" [17de9321-4a0f-40fc-acc5-f8db227ed8c7] Running
	I0407 13:02:13.592223 1238390 system_pods.go:89] "kube-controller-manager-embed-certs-717935" [dc66e762-52f9-4a4c-8170-30cf48b7bc95] Running
	I0407 13:02:13.592227 1238390 system_pods.go:89] "kube-proxy-g8kpd" [f7bac5a9-1b6e-406d-8cfa-f4a459aa6bb7] Running
	I0407 13:02:13.592231 1238390 system_pods.go:89] "kube-scheduler-embed-certs-717935" [06c059d7-d1a1-4980-bd5c-6098a47a1dff] Running
	I0407 13:02:13.592237 1238390 system_pods.go:89] "storage-provisioner" [3ba6ab67-8297-4e38-8269-c509f5ac1124] Running
	I0407 13:02:13.592249 1238390 system_pods.go:126] duration metric: took 200.673127ms to wait for k8s-apps to be running ...
	I0407 13:02:13.592295 1238390 system_svc.go:44] waiting for kubelet service to be running ....
	I0407 13:02:13.592353 1238390 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0407 13:02:13.605066 1238390 system_svc.go:56] duration metric: took 12.760698ms WaitForService to wait for kubelet
	I0407 13:02:13.605099 1238390 kubeadm.go:582] duration metric: took 9.361209054s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0407 13:02:13.605118 1238390 node_conditions.go:102] verifying NodePressure condition ...
	I0407 13:02:13.791386 1238390 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0407 13:02:13.791418 1238390 node_conditions.go:123] node cpu capacity is 2
	I0407 13:02:13.791431 1238390 node_conditions.go:105] duration metric: took 186.307848ms to run NodePressure ...
	I0407 13:02:13.791443 1238390 start.go:241] waiting for startup goroutines ...
	I0407 13:02:13.791451 1238390 start.go:246] waiting for cluster config update ...
	I0407 13:02:13.791461 1238390 start.go:255] writing updated cluster config ...
	I0407 13:02:13.791745 1238390 ssh_runner.go:195] Run: rm -f paused
	I0407 13:02:13.852921 1238390 start.go:600] kubectl: 1.32.3, cluster: 1.32.2 (minor skew: 0)
	I0407 13:02:13.857062 1238390 out.go:177] * Done! kubectl is now configured to use "embed-certs-717935" cluster and "default" namespace by default
	I0407 13:02:16.529750 1223988 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0407 13:02:16.539005 1223988 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I0407 13:02:16.542065 1223988 out.go:201] 
	W0407 13:02:16.545157 1223988 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W0407 13:02:16.545250 1223988 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	W0407 13:02:16.545294 1223988 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	W0407 13:02:16.545328 1223988 out.go:270] * 
	W0407 13:02:16.546231 1223988 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0407 13:02:16.550123 1223988 out.go:201] 
	
	
	==> Docker <==
	Apr 07 12:57:09 old-k8s-version-907855 dockerd[1152]: time="2025-04-07T12:57:09.673865207Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
	Apr 07 12:57:09 old-k8s-version-907855 dockerd[1152]: time="2025-04-07T12:57:09.891799113Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4"
	Apr 07 12:57:09 old-k8s-version-907855 dockerd[1152]: time="2025-04-07T12:57:09.891905240Z" level=warning msg="[DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
	Apr 07 12:57:09 old-k8s-version-907855 dockerd[1152]: time="2025-04-07T12:57:09.891933745Z" level=info msg="Attempting next endpoint for pull after error: [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	Apr 07 12:57:32 old-k8s-version-907855 dockerd[1152]: time="2025-04-07T12:57:32.669825123Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
	Apr 07 12:57:32 old-k8s-version-907855 dockerd[1152]: time="2025-04-07T12:57:32.871637380Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4"
	Apr 07 12:57:32 old-k8s-version-907855 dockerd[1152]: time="2025-04-07T12:57:32.871746091Z" level=warning msg="[DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
	Apr 07 12:57:32 old-k8s-version-907855 dockerd[1152]: time="2025-04-07T12:57:32.871774572Z" level=info msg="Attempting next endpoint for pull after error: [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	Apr 07 12:57:57 old-k8s-version-907855 dockerd[1152]: time="2025-04-07T12:57:57.437620968Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	Apr 07 12:57:57 old-k8s-version-907855 dockerd[1152]: time="2025-04-07T12:57:57.437661149Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	Apr 07 12:57:57 old-k8s-version-907855 dockerd[1152]: time="2025-04-07T12:57:57.440587621Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	Apr 07 12:58:25 old-k8s-version-907855 dockerd[1152]: time="2025-04-07T12:58:25.658183154Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
	Apr 07 12:58:25 old-k8s-version-907855 dockerd[1152]: time="2025-04-07T12:58:25.858038002Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4"
	Apr 07 12:58:25 old-k8s-version-907855 dockerd[1152]: time="2025-04-07T12:58:25.858415558Z" level=warning msg="[DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
	Apr 07 12:58:25 old-k8s-version-907855 dockerd[1152]: time="2025-04-07T12:58:25.858458184Z" level=info msg="Attempting next endpoint for pull after error: [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	Apr 07 12:59:27 old-k8s-version-907855 dockerd[1152]: time="2025-04-07T12:59:27.429428404Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	Apr 07 12:59:27 old-k8s-version-907855 dockerd[1152]: time="2025-04-07T12:59:27.429468544Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	Apr 07 12:59:27 old-k8s-version-907855 dockerd[1152]: time="2025-04-07T12:59:27.432170676Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	Apr 07 12:59:56 old-k8s-version-907855 dockerd[1152]: time="2025-04-07T12:59:56.648112235Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
	Apr 07 12:59:56 old-k8s-version-907855 dockerd[1152]: time="2025-04-07T12:59:56.953060009Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4"
	Apr 07 12:59:56 old-k8s-version-907855 dockerd[1152]: time="2025-04-07T12:59:56.953150054Z" level=warning msg="[DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
	Apr 07 12:59:56 old-k8s-version-907855 dockerd[1152]: time="2025-04-07T12:59:56.953182202Z" level=info msg="Attempting next endpoint for pull after error: [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	Apr 07 13:02:16 old-k8s-version-907855 dockerd[1152]: time="2025-04-07T13:02:16.448879546Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	Apr 07 13:02:16 old-k8s-version-907855 dockerd[1152]: time="2025-04-07T13:02:16.448922057Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	Apr 07 13:02:16 old-k8s-version-907855 dockerd[1152]: time="2025-04-07T13:02:16.456916617Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
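	
	Note: the dockerd excerpt above shows two independent pull failures. Pulls of fake.domain/registry.k8s.io/echoserver:1.4 (the metrics-server test image) fail at DNS resolution because fake.domain does not resolve, while pulls of registry.k8s.io/echoserver:1.4 are rejected because the image is still published as Docker Image manifest v2, schema 1, which Docker 28 disables by default. A minimal sketch of how one might confirm both from the node (assuming the docker CLI's manifest subcommand is available and nslookup is installed):
	
		# A schema 1 image reports the legacy mediaType seen in the log,
		# application/vnd.docker.distribution.manifest.v1+prettyjws
		docker manifest inspect --verbose registry.k8s.io/echoserver:1.4
	
		# Reproduce the lookup failure against the resolver from the log
		nslookup fake.domain 192.168.85.1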
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	71f6bbb993410       ba04bb24b9575                                                                                         5 minutes ago       Running             storage-provisioner       2                   a25876d3f2e91       storage-provisioner
	f4034a5c5e258       kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93        5 minutes ago       Running             kubernetes-dashboard      0                   01b2ccc2a64bf       kubernetes-dashboard-cd95d586-dvssg
	02c99fe2d89ed       25a5233254979                                                                                         5 minutes ago       Running             kube-proxy                1                   584bab7dbc7f0       kube-proxy-qskm8
	2983fb057c2b5       1611cd07b61d5                                                                                         5 minutes ago       Running             busybox                   1                   081d5d44d8496       busybox
	e6a43a71b1f6a       db91994f4ee8f                                                                                         5 minutes ago       Running             coredns                   1                   58cc5408657fa       coredns-74ff55c5b-mmgvz
	49a236bde2cb7       ba04bb24b9575                                                                                         5 minutes ago       Exited              storage-provisioner       1                   a25876d3f2e91       storage-provisioner
	499bde040d372       05b738aa1bc63                                                                                         5 minutes ago       Running             etcd                      1                   c4c6e79d59c9e       etcd-old-k8s-version-907855
	0f335eb94cad1       e7605f88f17d6                                                                                         5 minutes ago       Running             kube-scheduler            1                   f230bf828f0f2       kube-scheduler-old-k8s-version-907855
	5abec15abc05e       1df8a2b116bd1                                                                                         5 minutes ago       Running             kube-controller-manager   1                   555b5803f2b33       kube-controller-manager-old-k8s-version-907855
	8e9ca3cf686fa       2c08bbbc02d3a                                                                                         5 minutes ago       Running             kube-apiserver            1                   7377858ce9c96       kube-apiserver-old-k8s-version-907855
	7e76e32f2f700       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   6 minutes ago       Exited              busybox                   0                   a221f8f8c7c8b       busybox
	73c94e36d8a27       db91994f4ee8f                                                                                         8 minutes ago       Exited              coredns                   0                   4fa1cffb0d6ff       coredns-74ff55c5b-mmgvz
	308161cfd1116       25a5233254979                                                                                         8 minutes ago       Exited              kube-proxy                0                   ed77721cc87d8       kube-proxy-qskm8
	9715ae775fae6       e7605f88f17d6                                                                                         8 minutes ago       Exited              kube-scheduler            0                   d0a51754424dd       kube-scheduler-old-k8s-version-907855
	3652c993a04ea       1df8a2b116bd1                                                                                         8 minutes ago       Exited              kube-controller-manager   0                   d87dd8870d999       kube-controller-manager-old-k8s-version-907855
	002b3321c8c93       2c08bbbc02d3a                                                                                         8 minutes ago       Exited              kube-apiserver            0                   9612ca4452acf       kube-apiserver-old-k8s-version-907855
	76fcb451fd44a       05b738aa1bc63                                                                                         8 minutes ago       Exited              etcd                      0                   6fef4f567116c       etcd-old-k8s-version-907855
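	
	Note: in the table above each control-plane container appears twice, once Exited at ATTEMPT 0 (the first start) and once Running at ATTEMPT 1 (after SecondStart), while no container exists at all for metrics-server or dashboard-metrics-scraper, consistent with the pull failures. Since this cluster uses dockershim, pod containers carry k8s_-prefixed names, so a quick check straight against the docker daemon could look like (a sketch, assuming the usual k8s_<container>_<pod>_<namespace> naming):
	
		docker ps -a | grep -E 'metrics-server|dashboard-metrics-scraper'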
	
	
	==> coredns [73c94e36d8a2] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/reload: Running configuration MD5 = 093a0bf1423dd8c4eee62372bb216168
	[INFO] Reloading complete
	[INFO] 127.0.0.1:44935 - 64314 "HINFO IN 4975802979400559284.682108104929532584. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.033413393s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	I0407 12:54:45.709824       1 trace.go:116] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2025-04-07 12:54:15.709225657 +0000 UTC m=+0.059936955) (total time: 30.000486141s):
	Trace[2019727887]: [30.000486141s] [30.000486141s] END
	E0407 12:54:45.709857       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0407 12:54:45.712315       1 trace.go:116] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2025-04-07 12:54:15.711987176 +0000 UTC m=+0.062698466) (total time: 30.000303532s):
	Trace[939984059]: [30.000303532s] [30.000303532s] END
	I0407 12:54:45.712484       1 trace.go:116] Trace[911902081]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2025-04-07 12:54:15.712277363 +0000 UTC m=+0.062988661) (total time: 30.000191276s):
	Trace[911902081]: [30.000191276s] [30.000191276s] END
	E0407 12:54:45.712551       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	E0407 12:54:45.712492       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> coredns [e6a43a71b1f6] <==
	I0407 12:57:00.584328       1 trace.go:116] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2025-04-07 12:56:30.581860555 +0000 UTC m=+0.099318922) (total time: 30.001703838s):
	Trace[2019727887]: [30.001703838s] [30.001703838s] END
	E0407 12:57:00.584380       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0407 12:57:00.584547       1 trace.go:116] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2025-04-07 12:56:30.582325448 +0000 UTC m=+0.099783807) (total time: 30.002208919s):
	Trace[939984059]: [30.002208919s] [30.002208919s] END
	E0407 12:57:00.584696       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0407 12:57:00.584675       1 trace.go:116] Trace[911902081]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2025-04-07 12:56:30.584386548 +0000 UTC m=+0.101844915) (total time: 30.000274172s):
	Trace[911902081]: [30.000274172s] [30.000274172s] END
	E0407 12:57:00.584711       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	.:53
	[INFO] plugin/reload: Running configuration MD5 = 093a0bf1423dd8c4eee62372bb216168
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:34408 - 33424 "HINFO IN 1869271092226084053.5827832347166876870. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.031139301s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
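	
	Note: in both coredns instances the 30-second ListAndWatch timeouts against https://10.96.0.1:443 occur in the first half minute after the container starts, while the kubernetes service VIP is not yet reachable from the pod; they clear on their own, and the older instance then logs SIGTERM from the Stop phase before SecondStart. The VIP can be sanity-checked with, for example:
	
		kubectl get svc kubernetes
		kubectl get endpoints kubernetes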
	
	
	==> describe nodes <==
	Name:               old-k8s-version-907855
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-907855
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=33e6edc58d2014d70e908473920ef4ac8eae1e43
	                    minikube.k8s.io/name=old-k8s-version-907855
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_04_07T12_53_59_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Apr 2025 12:53:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-907855
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 07 Apr 2025 13:02:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 07 Apr 2025 12:57:18 +0000   Mon, 07 Apr 2025 12:53:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 07 Apr 2025 12:57:18 +0000   Mon, 07 Apr 2025 12:53:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 07 Apr 2025 12:57:18 +0000   Mon, 07 Apr 2025 12:53:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 07 Apr 2025 12:57:18 +0000   Mon, 07 Apr 2025 12:54:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-907855
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 5b1a9cff095c40c0b7f82bfe3d673ed7
	  System UUID:                1cddb69d-b5b9-4807-a44e-4d49b64ab0e9
	  Boot ID:                    48eff5d2-7902-459b-9c1c-54fa612d73c3
	  Kernel Version:             5.15.0-1081-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://28.0.4
	  Kubelet Version:            v1.20.0
	  Kube-Proxy Version:         v1.20.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m37s
	  kube-system                 coredns-74ff55c5b-mmgvz                           100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     8m3s
	  kube-system                 etcd-old-k8s-version-907855                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         8m14s
	  kube-system                 kube-apiserver-old-k8s-version-907855             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m14s
	  kube-system                 kube-controller-manager-old-k8s-version-907855    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m14s
	  kube-system                 kube-proxy-qskm8                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m3s
	  kube-system                 kube-scheduler-old-k8s-version-907855             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m14s
	  kube-system                 metrics-server-9975d5f86-hpzkf                    100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         6m23s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m1s
	  kubernetes-dashboard        dashboard-metrics-scraper-8d5bb5db8-rl4hx         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m29s
	  kubernetes-dashboard        kubernetes-dashboard-cd95d586-dvssg               0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             370Mi (4%)  170Mi (2%)
	  ephemeral-storage  100Mi (0%)  0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  8m29s (x5 over 8m29s)  kubelet     Node old-k8s-version-907855 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m29s (x5 over 8m29s)  kubelet     Node old-k8s-version-907855 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m29s (x5 over 8m29s)  kubelet     Node old-k8s-version-907855 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m15s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m15s                  kubelet     Node old-k8s-version-907855 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m15s                  kubelet     Node old-k8s-version-907855 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m15s                  kubelet     Node old-k8s-version-907855 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m15s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                8m5s                   kubelet     Node old-k8s-version-907855 status is now: NodeReady
	  Normal  Starting                 8m2s                   kube-proxy  Starting kube-proxy.
	  Normal  Starting                 5m59s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m59s (x8 over 5m59s)  kubelet     Node old-k8s-version-907855 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m59s (x8 over 5m59s)  kubelet     Node old-k8s-version-907855 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m59s (x7 over 5m59s)  kubelet     Node old-k8s-version-907855 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m59s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 5m46s                  kube-proxy  Starting kube-proxy.
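	
	Note: the node itself is healthy: Taints <none>, Ready True since 12:54:12, and the Events list shows two kubelet generations (8m15s ago for the first start, 5m59s ago for the restart). The damage is confined to pods that never pulled their images; a hedged one-liner to surface them:
	
		# Pods stuck short of Running, e.g. Pending with ImagePullBackOff
		kubectl get pods -A --field-selector=status.phase!=Running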
	
	
	==> dmesg <==
	[Apr 7 11:41] systemd-journald[221]: Failed to send WATCHDOG=1 notification message: Connection refused
	[Apr 7 12:00] hrtimer: interrupt took 3163925 ns
	
	
	==> etcd [499bde040d37] <==
	2025-04-07 12:58:08.124140 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-07 12:58:18.124238 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-07 12:58:28.124368 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-07 12:58:38.124602 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-07 12:58:48.124188 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-07 12:58:58.124436 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-07 12:59:08.124225 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-07 12:59:18.124431 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-07 12:59:28.124202 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-07 12:59:38.124286 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-07 12:59:48.124316 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-07 12:59:58.124296 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-07 13:00:08.124209 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-07 13:00:18.124253 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-07 13:00:28.124089 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-07 13:00:38.124169 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-07 13:00:48.124170 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-07 13:00:58.124288 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-07 13:01:08.124224 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-07 13:01:18.124279 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-07 13:01:28.124190 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-07 13:01:38.125411 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-07 13:01:48.124277 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-07 13:01:58.124435 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-07 13:02:08.124319 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> etcd [76fcb451fd44] <==
	raft2025/04/07 12:53:49 INFO: 9f0758e1c58a86ed became leader at term 2
	raft2025/04/07 12:53:49 INFO: raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2
	2025-04-07 12:53:49.536284 I | etcdserver: setting up the initial cluster version to 3.4
	2025-04-07 12:53:49.538160 N | etcdserver/membership: set the initial cluster version to 3.4
	2025-04-07 12:53:49.538243 I | etcdserver/api: enabled capabilities for version 3.4
	2025-04-07 12:53:49.538281 I | etcdserver: published {Name:old-k8s-version-907855 ClientURLs:[https://192.168.85.2:2379]} to cluster 68eaea490fab4e05
	2025-04-07 12:53:49.538412 I | embed: ready to serve client requests
	2025-04-07 12:53:49.538473 I | embed: ready to serve client requests
	2025-04-07 12:53:49.553402 I | embed: serving client requests on 127.0.0.1:2379
	2025-04-07 12:53:49.553954 I | embed: serving client requests on 192.168.85.2:2379
	2025-04-07 12:53:58.162503 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-07 12:54:06.716933 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-07 12:54:11.788349 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-07 12:54:21.788541 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-07 12:54:31.788463 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-07 12:54:41.788568 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-07 12:54:51.788440 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-07 12:55:01.788409 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-07 12:55:11.788589 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-07 12:55:21.788462 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-07 12:55:31.788603 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-07 12:55:41.789103 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-07 12:55:51.788657 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-07 12:55:54.639458 N | pkg/osutil: received terminated signal, shutting down...
	2025-04-07 12:55:54.707556 I | etcdserver: skipped leadership transfer for single voting member cluster
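	
	Note: the terminated-signal line at 12:55:54 marks the Stop phase before SecondStart, and "skipped leadership transfer" is expected for minikube's single-member etcd. The replacement instance above answers /health every ten seconds; the same check can be run by hand, as a sketch that assumes minikube's default certificate locations inside the etcd container:
	
		docker exec 499bde040d37 etcdctl \
		  --endpoints=https://127.0.0.1:2379 \
		  --cacert=/var/lib/minikube/certs/etcd/ca.crt \
		  --cert=/var/lib/minikube/certs/etcd/server.crt \
		  --key=/var/lib/minikube/certs/etcd/server.key \
		  endpoint health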
	
	
	==> kernel <==
	 13:02:18 up  4:44,  0 users,  load average: 2.44, 2.63, 3.29
	Linux old-k8s-version-907855 5.15.0-1081-aws #88~20.04.1-Ubuntu SMP Fri Mar 28 14:48:25 UTC 2025 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [002b3321c8c9] <==
	W0407 12:56:04.279874       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0407 12:56:04.300664       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0407 12:56:04.350426       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0407 12:56:04.357686       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0407 12:56:04.359347       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0407 12:56:04.406658       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0407 12:56:04.412310       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0407 12:56:04.465479       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0407 12:56:04.519127       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0407 12:56:04.528826       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0407 12:56:04.531178       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0407 12:56:04.539022       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0407 12:56:04.539437       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0407 12:56:04.550176       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0407 12:56:04.590426       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0407 12:56:04.591283       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0407 12:56:04.622798       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0407 12:56:04.670868       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0407 12:56:04.676087       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0407 12:56:04.679476       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0407 12:56:04.693136       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0407 12:56:04.704542       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0407 12:56:04.798863       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0407 12:56:04.854191       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0407 12:56:04.885856       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	
	
	==> kube-apiserver [8e9ca3cf686f] <==
	I0407 12:58:54.643192       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0407 12:58:54.643202       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0407 12:59:26.734836       1 client.go:360] parsed scheme: "passthrough"
	I0407 12:59:26.734904       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0407 12:59:26.734912       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0407 12:59:31.548771       1 handler_proxy.go:102] no RequestInfo found in the context
	E0407 12:59:31.549049       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0407 12:59:31.549067       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0407 12:59:59.837556       1 client.go:360] parsed scheme: "passthrough"
	I0407 12:59:59.837597       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0407 12:59:59.837605       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0407 13:00:41.908987       1 client.go:360] parsed scheme: "passthrough"
	I0407 13:00:41.909030       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0407 13:00:41.909040       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0407 13:01:14.328613       1 client.go:360] parsed scheme: "passthrough"
	I0407 13:01:14.328651       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0407 13:01:14.328660       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0407 13:01:28.509604       1 handler_proxy.go:102] no RequestInfo found in the context
	E0407 13:01:28.509690       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0407 13:01:28.509711       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0407 13:01:53.883949       1 client.go:360] parsed scheme: "passthrough"
	I0407 13:01:53.884005       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0407 13:01:53.884014       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
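	
	Note: the recurring 503s for v1beta1.metrics.k8s.io are a downstream symptom: the metrics-server pod never started, so the aggregated APIService it backs stays unavailable and the apiserver keeps requeueing its OpenAPI fetch. The same state can be read directly, for example:
	
		kubectl get apiservice v1beta1.metrics.k8s.io
		# or reproduce the 503 the aggregator sees:
		kubectl get --raw /apis/metrics.k8s.io/v1beta1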
	
	
	==> kube-controller-manager [3652c993a04e] <==
	I0407 12:54:14.069870       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-qskm8"
	I0407 12:54:14.075795       1 shared_informer.go:247] Caches are synced for persistent volume 
	I0407 12:54:14.080033       1 range_allocator.go:373] Set node old-k8s-version-907855 PodCIDR to [10.244.0.0/24]
	I0407 12:54:14.161268       1 shared_informer.go:247] Caches are synced for deployment 
	E0407 12:54:14.218671       1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
	I0407 12:54:14.225394       1 shared_informer.go:247] Caches are synced for ReplicationController 
	I0407 12:54:14.243921       1 shared_informer.go:247] Caches are synced for stateful set 
	I0407 12:54:14.252746       1 shared_informer.go:247] Caches are synced for resource quota 
	I0407 12:54:14.261265       1 shared_informer.go:247] Caches are synced for resource quota 
	I0407 12:54:14.280292       1 shared_informer.go:247] Caches are synced for disruption 
	I0407 12:54:14.280311       1 disruption.go:339] Sending events to api server.
	I0407 12:54:14.281237       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-74ff55c5b to 2"
	I0407 12:54:14.364144       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-4bldm"
	I0407 12:54:14.449229       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-mmgvz"
	I0407 12:54:14.909679       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I0407 12:54:14.945016       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0407 12:54:14.945038       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0407 12:54:15.009903       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0407 12:54:16.193526       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
	I0407 12:54:16.221990       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-4bldm"
	I0407 12:55:53.475851       1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-9975d5f86 to 1"
	E0407 12:55:53.730528       1 clusterroleaggregation_controller.go:181] view failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "view": the object has been modified; please apply your changes to the latest version and try again
	E0407 12:55:53.735328       1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
	I0407 12:55:54.564286       1 event.go:291] "Event occurred" object="kube-system/metrics-server-9975d5f86" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-9975d5f86-hpzkf"
	E0407 12:55:54.909480       1 replica_set.go:532] sync "kube-system/metrics-server-9975d5f86" failed with Get "https://192.168.85.2:8443/apis/apps/v1/namespaces/kube-system/replicasets/metrics-server-9975d5f86": dial tcp 192.168.85.2:8443: connect: connection refused
	
	
	==> kube-controller-manager [5abec15abc05] <==
	W0407 12:57:53.605358       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0407 12:58:19.652949       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0407 12:58:25.255883       1 request.go:655] Throttling request took 1.048403654s, request: GET:https://192.168.85.2:8443/apis/batch/v1beta1?timeout=32s
	W0407 12:58:26.107356       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0407 12:58:50.154931       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0407 12:58:57.757760       1 request.go:655] Throttling request took 1.048494964s, request: GET:https://192.168.85.2:8443/apis/scheduling.k8s.io/v1?timeout=32s
	W0407 12:58:58.609343       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0407 12:59:20.656906       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0407 12:59:30.270152       1 request.go:655] Throttling request took 1.048320412s, request: GET:https://192.168.85.2:8443/apis/apiextensions.k8s.io/v1beta1?timeout=32s
	W0407 12:59:31.121770       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0407 12:59:51.159199       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0407 13:00:02.772232       1 request.go:655] Throttling request took 1.044919771s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
	W0407 13:00:03.623698       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0407 13:00:21.661107       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0407 13:00:35.274275       1 request.go:655] Throttling request took 1.048177604s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
	W0407 13:00:36.125726       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0407 13:00:52.163166       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0407 13:01:07.776104       1 request.go:655] Throttling request took 1.048371631s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
	W0407 13:01:08.627754       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0407 13:01:22.665308       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0407 13:01:40.278089       1 request.go:655] Throttling request took 1.047958855s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
	W0407 13:01:41.129831       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0407 13:01:53.170631       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0407 13:02:12.780245       1 request.go:655] Throttling request took 1.048367326s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
	W0407 13:02:13.631761       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
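	
	Note: the controller-manager lines are the same root cause seen from another component: discovery of metrics.k8s.io/v1beta1 fails, the garbage collector and resource-quota controllers report it on every sweep, and the repeated full-discovery requests trip client-side throttling ("Throttling request took ~1s"). These should clear on their own once the APIService becomes available; a quick discovery check, as a sketch:
	
		kubectl api-resources --api-group=metrics.k8s.io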
	
	
	==> kube-proxy [02c99fe2d89e] <==
	I0407 12:56:31.036107       1 node.go:172] Successfully retrieved node IP: 192.168.85.2
	I0407 12:56:31.036190       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.85.2), assume IPv4 operation
	W0407 12:56:31.058482       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0407 12:56:31.058627       1 server_others.go:185] Using iptables Proxier.
	I0407 12:56:31.058874       1 server.go:650] Version: v1.20.0
	I0407 12:56:31.059428       1 config.go:315] Starting service config controller
	I0407 12:56:31.059445       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0407 12:56:31.061910       1 config.go:224] Starting endpoint slice config controller
	I0407 12:56:31.061940       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0407 12:56:31.159586       1 shared_informer.go:247] Caches are synced for service config 
	I0407 12:56:31.162110       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	
	==> kube-proxy [308161cfd111] <==
	I0407 12:54:15.748456       1 node.go:172] Successfully retrieved node IP: 192.168.85.2
	I0407 12:54:15.748553       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.85.2), assume IPv4 operation
	W0407 12:54:15.901237       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0407 12:54:15.901344       1 server_others.go:185] Using iptables Proxier.
	I0407 12:54:15.901564       1 server.go:650] Version: v1.20.0
	I0407 12:54:15.902345       1 config.go:315] Starting service config controller
	I0407 12:54:15.902361       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0407 12:54:15.902377       1 config.go:224] Starting endpoint slice config controller
	I0407 12:54:15.902381       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0407 12:54:16.002488       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0407 12:54:16.002563       1 shared_informer.go:247] Caches are synced for service config 
	
	
	==> kube-scheduler [0f335eb94cad] <==
	I0407 12:56:21.639804       1 serving.go:331] Generated self-signed cert in-memory
	W0407 12:56:27.269489       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0407 12:56:27.276679       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0407 12:56:27.276725       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0407 12:56:27.276732       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0407 12:56:27.697156       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0407 12:56:27.697468       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0407 12:56:27.697607       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0407 12:56:27.697743       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I0407 12:56:27.997730       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kube-scheduler [9715ae775fae] <==
	I0407 12:53:50.783554       1 serving.go:331] Generated self-signed cert in-memory
	W0407 12:53:55.636443       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0407 12:53:55.639176       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0407 12:53:55.639200       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0407 12:53:55.639206       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0407 12:53:55.703047       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0407 12:53:55.703334       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0407 12:53:55.708901       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0407 12:53:55.709096       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0407 12:53:55.714980       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0407 12:53:55.715548       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0407 12:53:55.716119       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0407 12:53:55.716341       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0407 12:53:55.716551       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0407 12:53:55.716657       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0407 12:53:55.716766       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0407 12:53:55.716869       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0407 12:53:55.716734       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0407 12:53:55.717309       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0407 12:53:55.717533       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0407 12:53:55.721797       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0407 12:53:56.556036       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0407 12:53:56.704553       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0407 12:53:56.741105       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0407 12:53:58.503568       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kubelet <==
	Apr 07 12:59:56 old-k8s-version-907855 kubelet[1490]: E0407 12:59:56.956582    1490 pod_workers.go:191] Error syncing pod ed73d674-5412-4f15-ab2f-792e1c2d94ea ("dashboard-metrics-scraper-8d5bb5db8-rl4hx_kubernetes-dashboard(ed73d674-5412-4f15-ab2f-792e1c2d94ea)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	Apr 07 13:00:06 old-k8s-version-907855 kubelet[1490]: E0407 13:00:06.417090    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Apr 07 13:00:12 old-k8s-version-907855 kubelet[1490]: E0407 13:00:12.417499    1490 pod_workers.go:191] Error syncing pod ed73d674-5412-4f15-ab2f-792e1c2d94ea ("dashboard-metrics-scraper-8d5bb5db8-rl4hx_kubernetes-dashboard(ed73d674-5412-4f15-ab2f-792e1c2d94ea)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	Apr 07 13:00:21 old-k8s-version-907855 kubelet[1490]: E0407 13:00:21.417196    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Apr 07 13:00:24 old-k8s-version-907855 kubelet[1490]: E0407 13:00:24.417156    1490 pod_workers.go:191] Error syncing pod ed73d674-5412-4f15-ab2f-792e1c2d94ea ("dashboard-metrics-scraper-8d5bb5db8-rl4hx_kubernetes-dashboard(ed73d674-5412-4f15-ab2f-792e1c2d94ea)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	Apr 07 13:00:35 old-k8s-version-907855 kubelet[1490]: E0407 13:00:35.417216    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Apr 07 13:00:39 old-k8s-version-907855 kubelet[1490]: E0407 13:00:39.417204    1490 pod_workers.go:191] Error syncing pod ed73d674-5412-4f15-ab2f-792e1c2d94ea ("dashboard-metrics-scraper-8d5bb5db8-rl4hx_kubernetes-dashboard(ed73d674-5412-4f15-ab2f-792e1c2d94ea)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	Apr 07 13:00:46 old-k8s-version-907855 kubelet[1490]: E0407 13:00:46.418573    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Apr 07 13:00:50 old-k8s-version-907855 kubelet[1490]: E0407 13:00:50.416979    1490 pod_workers.go:191] Error syncing pod ed73d674-5412-4f15-ab2f-792e1c2d94ea ("dashboard-metrics-scraper-8d5bb5db8-rl4hx_kubernetes-dashboard(ed73d674-5412-4f15-ab2f-792e1c2d94ea)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	Apr 07 13:01:01 old-k8s-version-907855 kubelet[1490]: E0407 13:01:01.417156    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Apr 07 13:01:03 old-k8s-version-907855 kubelet[1490]: E0407 13:01:03.423671    1490 pod_workers.go:191] Error syncing pod ed73d674-5412-4f15-ab2f-792e1c2d94ea ("dashboard-metrics-scraper-8d5bb5db8-rl4hx_kubernetes-dashboard(ed73d674-5412-4f15-ab2f-792e1c2d94ea)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	Apr 07 13:01:14 old-k8s-version-907855 kubelet[1490]: E0407 13:01:14.418839    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Apr 07 13:01:17 old-k8s-version-907855 kubelet[1490]: E0407 13:01:17.417074    1490 pod_workers.go:191] Error syncing pod ed73d674-5412-4f15-ab2f-792e1c2d94ea ("dashboard-metrics-scraper-8d5bb5db8-rl4hx_kubernetes-dashboard(ed73d674-5412-4f15-ab2f-792e1c2d94ea)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	Apr 07 13:01:28 old-k8s-version-907855 kubelet[1490]: E0407 13:01:28.417882    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Apr 07 13:01:28 old-k8s-version-907855 kubelet[1490]: E0407 13:01:28.418181    1490 pod_workers.go:191] Error syncing pod ed73d674-5412-4f15-ab2f-792e1c2d94ea ("dashboard-metrics-scraper-8d5bb5db8-rl4hx_kubernetes-dashboard(ed73d674-5412-4f15-ab2f-792e1c2d94ea)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	Apr 07 13:01:41 old-k8s-version-907855 kubelet[1490]: E0407 13:01:41.417193    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Apr 07 13:01:42 old-k8s-version-907855 kubelet[1490]: E0407 13:01:42.418831    1490 pod_workers.go:191] Error syncing pod ed73d674-5412-4f15-ab2f-792e1c2d94ea ("dashboard-metrics-scraper-8d5bb5db8-rl4hx_kubernetes-dashboard(ed73d674-5412-4f15-ab2f-792e1c2d94ea)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	Apr 07 13:01:53 old-k8s-version-907855 kubelet[1490]: E0407 13:01:53.423295    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Apr 07 13:01:57 old-k8s-version-907855 kubelet[1490]: E0407 13:01:57.417250    1490 pod_workers.go:191] Error syncing pod ed73d674-5412-4f15-ab2f-792e1c2d94ea ("dashboard-metrics-scraper-8d5bb5db8-rl4hx_kubernetes-dashboard(ed73d674-5412-4f15-ab2f-792e1c2d94ea)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	Apr 07 13:02:05 old-k8s-version-907855 kubelet[1490]: E0407 13:02:05.421584    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Apr 07 13:02:08 old-k8s-version-907855 kubelet[1490]: E0407 13:02:08.425187    1490 pod_workers.go:191] Error syncing pod ed73d674-5412-4f15-ab2f-792e1c2d94ea ("dashboard-metrics-scraper-8d5bb5db8-rl4hx_kubernetes-dashboard(ed73d674-5412-4f15-ab2f-792e1c2d94ea)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	Apr 07 13:02:16 old-k8s-version-907855 kubelet[1490]: E0407 13:02:16.462394    1490 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host
	Apr 07 13:02:16 old-k8s-version-907855 kubelet[1490]: E0407 13:02:16.462997    1490 kuberuntime_image.go:51] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host
	Apr 07 13:02:16 old-k8s-version-907855 kubelet[1490]: E0407 13:02:16.464229    1490 kuberuntime_manager.go:829] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:metrics-server-token-gjsq5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exe
c:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-9975d5f86-hpzkf_kube-system(b4b3ca
a2-7507-4a31-bffa-18ed75a13a92): ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host
	Apr 07 13:02:16 old-k8s-version-907855 kubelet[1490]: E0407 13:02:16.464434    1490 pod_workers.go:191] Error syncing pod b4b3caa2-7507-4a31-bffa-18ed75a13a92 ("metrics-server-9975d5f86-hpzkf_kube-system(b4b3caa2-7507-4a31-bffa-18ed75a13a92)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	
	
	==> kubernetes-dashboard [f4034a5c5e25] <==
	2025/04/07 12:56:56 Using namespace: kubernetes-dashboard
	2025/04/07 12:56:56 Using in-cluster config to connect to apiserver
	2025/04/07 12:56:56 Using secret token for csrf signing
	2025/04/07 12:56:56 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/04/07 12:56:56 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/04/07 12:56:56 Successful initial request to the apiserver, version: v1.20.0
	2025/04/07 12:56:56 Generating JWE encryption key
	2025/04/07 12:56:56 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/04/07 12:56:56 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/04/07 12:56:58 Initializing JWE encryption key from synchronized object
	2025/04/07 12:56:58 Creating in-cluster Sidecar client
	2025/04/07 12:56:58 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/04/07 12:56:58 Serving insecurely on HTTP port: 9090
	2025/04/07 12:57:28 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/04/07 12:57:58 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/04/07 12:58:28 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/04/07 12:58:58 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/04/07 12:59:28 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/04/07 12:59:58 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/04/07 13:00:28 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/04/07 13:00:58 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/04/07 13:01:28 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/04/07 13:01:58 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/04/07 12:56:56 Starting overwatch
	
	
	==> storage-provisioner [49a236bde2cb] <==
	I0407 12:56:30.935837       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0407 12:57:00.939123       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [71f6bbb99341] <==
	I0407 12:57:12.578057       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0407 12:57:12.624067       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0407 12:57:12.624277       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0407 12:57:30.095012       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0407 12:57:30.096134       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-907855_935b49cf-39f2-44a4-a23d-8e7b6442f087!
	I0407 12:57:30.097350       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6a009a87-9cdd-4596-a112-09a934e31495", APIVersion:"v1", ResourceVersion:"814", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-907855_935b49cf-39f2-44a4-a23d-8e7b6442f087 became leader
	I0407 12:57:30.197242       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-907855_935b49cf-39f2-44a4-a23d-8e7b6442f087!
	

                                                
                                                
-- /stdout --
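
The dump above pins down why the two pods never reach Running: the metrics-server container is pinned to the deliberately unreachable image fake.domain/registry.k8s.io/echoserver:1.4 (DNS resolution fails with "lookup fake.domain on 192.168.85.1:53: no such host"), while dashboard-metrics-scraper sits in ImagePullBackOff on registry.k8s.io/echoserver:1.4. A minimal sketch for confirming this from a shell, assuming the old-k8s-version-907855 profile from this run is still present — the -l k8s-app=metrics-server label selector and the availability of nslookup inside the node are assumptions, not taken from this log:

	# List pods stuck outside Running, as the post-mortem below does.
	kubectl --context old-k8s-version-907855 get po -A --field-selector=status.phase!=Running
	# Inspect the pull errors on the metrics-server pod (label selector assumed).
	kubectl --context old-k8s-version-907855 -n kube-system describe pod -l k8s-app=metrics-server
	# The bogus registry host should fail to resolve from inside the node.
	minikube -p old-k8s-version-907855 ssh -- nslookup fake.domain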
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-907855 -n old-k8s-version-907855
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-907855 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-9975d5f86-hpzkf dashboard-metrics-scraper-8d5bb5db8-rl4hx
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-907855 describe pod metrics-server-9975d5f86-hpzkf dashboard-metrics-scraper-8d5bb5db8-rl4hx
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-907855 describe pod metrics-server-9975d5f86-hpzkf dashboard-metrics-scraper-8d5bb5db8-rl4hx: exit status 1 (86.811506ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-9975d5f86-hpzkf" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-8d5bb5db8-rl4hx" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-907855 describe pod metrics-server-9975d5f86-hpzkf dashboard-metrics-scraper-8d5bb5db8-rl4hx: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (373.35s)
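
Note the race in the post-mortem itself: helpers_test.go:272 saw both pods as non-running, yet by the time the describe ran both were already gone (NotFound), so the step exited 1. A hypothetical hardening — not what helpers_test.go does — that resolves namespace and name in one pass and tolerates pods that vanish between the two calls:

	# List namespace/name pairs of non-running pods, then describe each,
	# ignoring pods deleted between the list and the describe.
	kubectl --context old-k8s-version-907855 get po -A \
	  --field-selector=status.phase!=Running \
	  -o jsonpath='{range .items[*]}{.metadata.namespace}{" "}{.metadata.name}{"\n"}{end}' |
	while read -r ns name; do
	  kubectl --context old-k8s-version-907855 describe pod -n "$ns" "$name" || true
	done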

                                                
                                    

Test pass (319/346)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 5.9
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.1
9 TestDownloadOnly/v1.20.0/DeleteAll 0.22
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.15
12 TestDownloadOnly/v1.32.2/json-events 4.59
13 TestDownloadOnly/v1.32.2/preload-exists 0
17 TestDownloadOnly/v1.32.2/LogsDuration 0.16
18 TestDownloadOnly/v1.32.2/DeleteAll 0.37
19 TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds 0.22
21 TestBinaryMirror 0.61
22 TestOffline 89.38
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.08
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
27 TestAddons/Setup 225.89
29 TestAddons/serial/Volcano 42.2
31 TestAddons/serial/GCPAuth/Namespaces 0.2
32 TestAddons/serial/GCPAuth/FakeCredentials 8.99
35 TestAddons/parallel/Registry 16.42
36 TestAddons/parallel/Ingress 20.68
37 TestAddons/parallel/InspektorGadget 11.87
38 TestAddons/parallel/MetricsServer 6.94
40 TestAddons/parallel/CSI 50.87
41 TestAddons/parallel/Headlamp 17.76
42 TestAddons/parallel/CloudSpanner 5.58
43 TestAddons/parallel/LocalPath 53.94
44 TestAddons/parallel/NvidiaDevicePlugin 5.76
45 TestAddons/parallel/Yakd 11.83
47 TestAddons/StoppedEnableDisable 11.26
48 TestCertOptions 36.6
49 TestCertExpiration 240.73
50 TestDockerFlags 39.21
51 TestForceSystemdFlag 50.68
52 TestForceSystemdEnv 40.96
58 TestErrorSpam/setup 32.06
59 TestErrorSpam/start 0.8
60 TestErrorSpam/status 1.16
61 TestErrorSpam/pause 1.4
62 TestErrorSpam/unpause 1.43
63 TestErrorSpam/stop 2.08
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 73.52
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 35.04
70 TestFunctional/serial/KubeContext 0.07
71 TestFunctional/serial/KubectlGetPods 0.11
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.26
75 TestFunctional/serial/CacheCmd/cache/add_local 0.97
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.06
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.31
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.64
80 TestFunctional/serial/CacheCmd/cache/delete 0.13
81 TestFunctional/serial/MinikubeKubectlCmd 0.14
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.16
83 TestFunctional/serial/ExtraConfig 44.39
84 TestFunctional/serial/ComponentHealth 0.11
85 TestFunctional/serial/LogsCmd 1.29
86 TestFunctional/serial/LogsFileCmd 1.29
87 TestFunctional/serial/InvalidService 4.62
89 TestFunctional/parallel/ConfigCmd 0.5
90 TestFunctional/parallel/DashboardCmd 12.86
91 TestFunctional/parallel/DryRun 0.47
92 TestFunctional/parallel/InternationalLanguage 0.26
93 TestFunctional/parallel/StatusCmd 1.36
97 TestFunctional/parallel/ServiceCmdConnect 10.63
98 TestFunctional/parallel/AddonsCmd 0.19
99 TestFunctional/parallel/PersistentVolumeClaim 26.16
101 TestFunctional/parallel/SSHCmd 0.72
102 TestFunctional/parallel/CpCmd 2.43
104 TestFunctional/parallel/FileSync 0.38
105 TestFunctional/parallel/CertSync 2.32
109 TestFunctional/parallel/NodeLabels 0.14
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.3
113 TestFunctional/parallel/License 0.27
115 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.64
116 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
118 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.46
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.1
120 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
124 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
125 TestFunctional/parallel/ServiceCmd/DeployApp 6.27
126 TestFunctional/parallel/ProfileCmd/profile_not_create 0.55
127 TestFunctional/parallel/ServiceCmd/List 0.64
128 TestFunctional/parallel/ProfileCmd/profile_list 0.53
129 TestFunctional/parallel/ServiceCmd/JSONOutput 0.59
130 TestFunctional/parallel/ProfileCmd/profile_json_output 0.53
131 TestFunctional/parallel/ServiceCmd/HTTPS 0.58
132 TestFunctional/parallel/MountCmd/any-port 8.64
133 TestFunctional/parallel/ServiceCmd/Format 0.59
134 TestFunctional/parallel/ServiceCmd/URL 0.42
135 TestFunctional/parallel/MountCmd/specific-port 1.55
136 TestFunctional/parallel/MountCmd/VerifyCleanup 2.43
137 TestFunctional/parallel/Version/short 0.12
138 TestFunctional/parallel/Version/components 1.37
139 TestFunctional/parallel/ImageCommands/ImageListShort 0.22
140 TestFunctional/parallel/ImageCommands/ImageListTable 0.23
141 TestFunctional/parallel/ImageCommands/ImageListJson 0.27
142 TestFunctional/parallel/ImageCommands/ImageListYaml 0.25
143 TestFunctional/parallel/ImageCommands/ImageBuild 3.47
144 TestFunctional/parallel/ImageCommands/Setup 0.85
145 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.23
146 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.84
147 TestFunctional/parallel/UpdateContextCmd/no_changes 0.22
148 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.19
149 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.19
150 TestFunctional/parallel/DockerEnv/bash 1.29
151 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.3
152 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.47
153 TestFunctional/parallel/ImageCommands/ImageRemove 0.55
154 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.7
155 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.52
156 TestFunctional/delete_echo-server_images 0.04
157 TestFunctional/delete_my-image_image 0.02
158 TestFunctional/delete_minikube_cached_images 0.02
163 TestMultiControlPlane/serial/StartCluster 131.31
164 TestMultiControlPlane/serial/DeployApp 8.34
165 TestMultiControlPlane/serial/PingHostFromPods 1.79
166 TestMultiControlPlane/serial/AddWorkerNode 27.56
167 TestMultiControlPlane/serial/NodeLabels 0.11
168 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.03
169 TestMultiControlPlane/serial/CopyFile 19.74
170 TestMultiControlPlane/serial/StopSecondaryNode 11.85
171 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.77
172 TestMultiControlPlane/serial/RestartSecondaryNode 37.75
173 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.18
174 TestMultiControlPlane/serial/RestartClusterKeepsNodes 207.03
175 TestMultiControlPlane/serial/DeleteSecondaryNode 11.08
176 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.81
177 TestMultiControlPlane/serial/StopCluster 32.72
178 TestMultiControlPlane/serial/RestartCluster 84.47
179 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.79
180 TestMultiControlPlane/serial/AddSecondaryNode 49.94
181 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.07
184 TestImageBuild/serial/Setup 32.32
185 TestImageBuild/serial/NormalBuild 1.78
186 TestImageBuild/serial/BuildWithBuildArg 1
187 TestImageBuild/serial/BuildWithDockerIgnore 0.97
188 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.75
192 TestJSONOutput/start/Command 42.87
193 TestJSONOutput/start/Audit 0
195 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
196 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/pause/Command 0.6
199 TestJSONOutput/pause/Audit 0
201 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
202 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
204 TestJSONOutput/unpause/Command 0.53
205 TestJSONOutput/unpause/Audit 0
207 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
208 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
210 TestJSONOutput/stop/Command 5.73
211 TestJSONOutput/stop/Audit 0
213 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
214 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
215 TestErrorJSONOutput 0.25
217 TestKicCustomNetwork/create_custom_network 37.97
218 TestKicCustomNetwork/use_default_bridge_network 32.79
219 TestKicExistingNetwork 37.11
220 TestKicCustomSubnet 36.49
221 TestKicStaticIP 31.86
222 TestMainNoArgs 0.06
223 TestMinikubeProfile 71.2
226 TestMountStart/serial/StartWithMountFirst 11.07
227 TestMountStart/serial/VerifyMountFirst 0.26
228 TestMountStart/serial/StartWithMountSecond 8.76
229 TestMountStart/serial/VerifyMountSecond 0.25
230 TestMountStart/serial/DeleteFirst 1.49
231 TestMountStart/serial/VerifyMountPostDelete 0.26
232 TestMountStart/serial/Stop 1.19
233 TestMountStart/serial/RestartStopped 8.05
234 TestMountStart/serial/VerifyMountPostStop 0.26
237 TestMultiNode/serial/FreshStart2Nodes 87.49
238 TestMultiNode/serial/DeployApp2Nodes 37.2
239 TestMultiNode/serial/PingHostFrom2Pods 1.03
240 TestMultiNode/serial/AddNode 18.22
241 TestMultiNode/serial/MultiNodeLabels 0.12
242 TestMultiNode/serial/ProfileList 0.73
243 TestMultiNode/serial/CopyFile 10.24
244 TestMultiNode/serial/StopNode 2.25
245 TestMultiNode/serial/StartAfterStop 11.09
246 TestMultiNode/serial/RestartKeepsNodes 86.05
247 TestMultiNode/serial/DeleteNode 5.45
248 TestMultiNode/serial/StopMultiNode 21.57
249 TestMultiNode/serial/RestartMultiNode 49.83
250 TestMultiNode/serial/ValidateNameConflict 35.82
255 TestPreload 138.81
257 TestScheduledStopUnix 104.73
258 TestSkaffold 117.21
260 TestInsufficientStorage 11.22
261 TestRunningBinaryUpgrade 105.86
263 TestKubernetesUpgrade 128.49
264 TestMissingContainerUpgrade 172.99
266 TestPause/serial/Start 51
267 TestPause/serial/SecondStartNoReconfiguration 34.96
268 TestPause/serial/Pause 0.61
269 TestPause/serial/VerifyStatus 0.37
270 TestPause/serial/Unpause 0.77
271 TestPause/serial/PauseAgain 0.89
272 TestPause/serial/DeletePaused 2.41
273 TestPause/serial/VerifyDeletedResources 0.22
274 TestStoppedBinaryUpgrade/Setup 0.72
275 TestStoppedBinaryUpgrade/Upgrade 95.76
276 TestStoppedBinaryUpgrade/MinikubeLogs 1.99
285 TestNoKubernetes/serial/StartNoK8sWithVersion 0.14
286 TestNoKubernetes/serial/StartWithK8s 42.44
298 TestNoKubernetes/serial/StartWithStopK8s 19.54
299 TestNoKubernetes/serial/Start 8.71
300 TestNoKubernetes/serial/VerifyK8sNotRunning 0.36
301 TestNoKubernetes/serial/ProfileList 1.26
302 TestNoKubernetes/serial/Stop 1.28
303 TestNoKubernetes/serial/StartNoArgs 8.67
304 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.31
306 TestStartStop/group/old-k8s-version/serial/FirstStart 147.46
308 TestStartStop/group/no-preload/serial/FirstStart 52.75
309 TestStartStop/group/old-k8s-version/serial/DeployApp 11.79
310 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.73
311 TestStartStop/group/old-k8s-version/serial/Stop 11.5
312 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.27
314 TestStartStop/group/no-preload/serial/DeployApp 9.5
315 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.84
316 TestStartStop/group/no-preload/serial/Stop 11.23
317 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.23
318 TestStartStop/group/no-preload/serial/SecondStart 269.95
319 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
320 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.1
321 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.24
322 TestStartStop/group/no-preload/serial/Pause 2.99
324 TestStartStop/group/embed-certs/serial/FirstStart 47.47
325 TestStartStop/group/embed-certs/serial/DeployApp 10.46
326 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
327 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.1
328 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.19
329 TestStartStop/group/embed-certs/serial/Stop 11.04
330 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.24
331 TestStartStop/group/old-k8s-version/serial/Pause 2.64
333 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 48.76
334 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.3
335 TestStartStop/group/embed-certs/serial/SecondStart 270.87
336 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.37
337 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.08
338 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.06
339 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.2
340 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 302.8
341 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
342 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.09
343 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.24
344 TestStartStop/group/embed-certs/serial/Pause 2.84
346 TestStartStop/group/newest-cni/serial/FirstStart 36.13
347 TestStartStop/group/newest-cni/serial/DeployApp 0
348 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.2
349 TestStartStop/group/newest-cni/serial/Stop 11.23
350 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
351 TestStartStop/group/newest-cni/serial/SecondStart 18.82
352 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
353 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
354 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.27
355 TestStartStop/group/newest-cni/serial/Pause 3.44
356 TestNetworkPlugins/group/auto/Start 79.52
357 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
358 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.13
359 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.25
360 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.99
361 TestNetworkPlugins/group/kindnet/Start 71.14
362 TestNetworkPlugins/group/auto/KubeletFlags 0.45
363 TestNetworkPlugins/group/auto/NetCatPod 12.44
364 TestNetworkPlugins/group/auto/DNS 0.25
365 TestNetworkPlugins/group/auto/Localhost 0.2
366 TestNetworkPlugins/group/auto/HairPin 0.22
367 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
368 TestNetworkPlugins/group/kindnet/KubeletFlags 0.36
369 TestNetworkPlugins/group/kindnet/NetCatPod 11.36
370 TestNetworkPlugins/group/calico/Start 86.61
371 TestNetworkPlugins/group/kindnet/DNS 0.23
372 TestNetworkPlugins/group/kindnet/Localhost 0.17
373 TestNetworkPlugins/group/kindnet/HairPin 0.18
374 TestNetworkPlugins/group/custom-flannel/Start 65.25
375 TestNetworkPlugins/group/calico/ControllerPod 6.01
376 TestNetworkPlugins/group/calico/KubeletFlags 0.32
377 TestNetworkPlugins/group/calico/NetCatPod 12.28
378 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.39
379 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.4
380 TestNetworkPlugins/group/calico/DNS 0.22
381 TestNetworkPlugins/group/calico/Localhost 0.17
382 TestNetworkPlugins/group/calico/HairPin 0.17
383 TestNetworkPlugins/group/custom-flannel/DNS 0.23
384 TestNetworkPlugins/group/custom-flannel/Localhost 0.19
385 TestNetworkPlugins/group/custom-flannel/HairPin 0.17
386 TestNetworkPlugins/group/false/Start 80.48
387 TestNetworkPlugins/group/enable-default-cni/Start 89.28
388 TestNetworkPlugins/group/false/KubeletFlags 0.3
389 TestNetworkPlugins/group/false/NetCatPod 11.3
390 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.3
391 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.29
392 TestNetworkPlugins/group/false/DNS 0.26
393 TestNetworkPlugins/group/false/Localhost 0.24
394 TestNetworkPlugins/group/false/HairPin 0.21
395 TestNetworkPlugins/group/enable-default-cni/DNS 0.28
396 TestNetworkPlugins/group/enable-default-cni/Localhost 0.24
397 TestNetworkPlugins/group/enable-default-cni/HairPin 0.27
398 TestNetworkPlugins/group/flannel/Start 64.89
399 TestNetworkPlugins/group/bridge/Start 49.59
400 TestNetworkPlugins/group/bridge/KubeletFlags 0.47
401 TestNetworkPlugins/group/bridge/NetCatPod 11.45
402 TestNetworkPlugins/group/flannel/ControllerPod 6
403 TestNetworkPlugins/group/flannel/KubeletFlags 0.29
404 TestNetworkPlugins/group/flannel/NetCatPod 9.3
405 TestNetworkPlugins/group/bridge/DNS 0.31
406 TestNetworkPlugins/group/bridge/Localhost 0.25
407 TestNetworkPlugins/group/bridge/HairPin 0.24
408 TestNetworkPlugins/group/flannel/DNS 0.25
409 TestNetworkPlugins/group/flannel/Localhost 0.25
410 TestNetworkPlugins/group/flannel/HairPin 0.24
411 TestNetworkPlugins/group/kubenet/Start 73.2
412 TestNetworkPlugins/group/kubenet/KubeletFlags 0.29
413 TestNetworkPlugins/group/kubenet/NetCatPod 9.26
414 TestNetworkPlugins/group/kubenet/DNS 0.18
415 TestNetworkPlugins/group/kubenet/Localhost 0.18
416 TestNetworkPlugins/group/kubenet/HairPin 0.18
TestDownloadOnly/v1.20.0/json-events (5.9s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-931238 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-931238 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (5.894832032s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (5.90s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0407 12:03:49.034597  907461 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
I0407 12:03:49.034675  907461 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20602-902080/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-931238
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-931238: exit status 85 (96.032456ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-931238 | jenkins | v1.35.0 | 07 Apr 25 12:03 UTC |          |
	|         | -p download-only-931238        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/07 12:03:43
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0407 12:03:43.188932  907467 out.go:345] Setting OutFile to fd 1 ...
	I0407 12:03:43.189047  907467 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 12:03:43.189081  907467 out.go:358] Setting ErrFile to fd 2...
	I0407 12:03:43.189093  907467 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 12:03:43.189349  907467 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20602-902080/.minikube/bin
	W0407 12:03:43.189468  907467 root.go:314] Error reading config file at /home/jenkins/minikube-integration/20602-902080/.minikube/config/config.json: open /home/jenkins/minikube-integration/20602-902080/.minikube/config/config.json: no such file or directory
	I0407 12:03:43.189843  907467 out.go:352] Setting JSON to true
	I0407 12:03:43.190665  907467 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":13568,"bootTime":1744013856,"procs":148,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1081-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0407 12:03:43.190728  907467 start.go:139] virtualization:  
	I0407 12:03:43.194657  907467 out.go:97] [download-only-931238] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	W0407 12:03:43.194828  907467 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/20602-902080/.minikube/cache/preloaded-tarball: no such file or directory
	I0407 12:03:43.194883  907467 notify.go:220] Checking for updates...
	I0407 12:03:43.197876  907467 out.go:169] MINIKUBE_LOCATION=20602
	I0407 12:03:43.200926  907467 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0407 12:03:43.203663  907467 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20602-902080/kubeconfig
	I0407 12:03:43.206540  907467 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20602-902080/.minikube
	I0407 12:03:43.209502  907467 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0407 12:03:43.215085  907467 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0407 12:03:43.215384  907467 driver.go:394] Setting default libvirt URI to qemu:///system
	I0407 12:03:43.241170  907467 docker.go:123] docker version: linux-28.0.4:Docker Engine - Community
	I0407 12:03:43.241299  907467 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0407 12:03:43.302728  907467 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:true NGoroutines:53 SystemTime:2025-04-07 12:03:43.29379728 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1081-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> Ser
verErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0]] Warnings:<nil>}}
	I0407 12:03:43.302831  907467 docker.go:318] overlay module found
	I0407 12:03:43.305704  907467 out.go:97] Using the docker driver based on user configuration
	I0407 12:03:43.305737  907467 start.go:297] selected driver: docker
	I0407 12:03:43.305752  907467 start.go:901] validating driver "docker" against <nil>
	I0407 12:03:43.305850  907467 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0407 12:03:43.372733  907467 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:true NGoroutines:53 SystemTime:2025-04-07 12:03:43.36309402 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1081-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> Ser
verErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0]] Warnings:<nil>}}
	I0407 12:03:43.372954  907467 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0407 12:03:43.373236  907467 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0407 12:03:43.373393  907467 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0407 12:03:43.376564  907467 out.go:169] Using Docker driver with root privileges
	I0407 12:03:43.379395  907467 cni.go:84] Creating CNI manager for ""
	I0407 12:03:43.379466  907467 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0407 12:03:43.379545  907467 start.go:340] cluster config:
	{Name:download-only-931238 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-931238 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 12:03:43.382436  907467 out.go:97] Starting "download-only-931238" primary control-plane node in "download-only-931238" cluster
	I0407 12:03:43.382472  907467 cache.go:121] Beginning downloading kic base image for docker with docker
	I0407 12:03:43.385454  907467 out.go:97] Pulling base image v0.0.46-1743675393-20591 ...
	I0407 12:03:43.385490  907467 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0407 12:03:43.385676  907467 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 in local docker daemon
	I0407 12:03:43.401392  907467 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 to local cache
	I0407 12:03:43.402191  907467 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 in local cache directory
	I0407 12:03:43.402296  907467 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 to local cache
	I0407 12:03:43.492926  907467 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0407 12:03:43.493006  907467 cache.go:56] Caching tarball of preloaded images
	I0407 12:03:43.493792  907467 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0407 12:03:43.497028  907467 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0407 12:03:43.497055  907467 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0407 12:03:43.582689  907467 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /home/jenkins/minikube-integration/20602-902080/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0407 12:03:47.215845  907467 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0407 12:03:47.215960  907467 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/20602-902080/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	
	
	* The control-plane node download-only-931238 host does not exist
	  To start a cluster, run: "minikube start -p download-only-931238"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.10s)
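
The Last Start log above also documents the preload path: minikube found no cached tarball, fetched https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 with an md5 checksum in the query string, and verified it after saving. A sketch for re-checking the cached artifact by hand, using the path and checksum exactly as they appear in the log:

	# Recompute the md5 of the cached preload and compare with the logged checksum.
	md5sum /home/jenkins/minikube-integration/20602-902080/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	# expected: 1a3e8f9b29e6affec63d76d0d3000942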

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.22s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-931238
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
TestDownloadOnly/v1.32.2/json-events (4.59s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-276411 --force --alsologtostderr --kubernetes-version=v1.32.2 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-276411 --force --alsologtostderr --kubernetes-version=v1.32.2 --container-runtime=docker --driver=docker  --container-runtime=docker: (4.594252504s)
--- PASS: TestDownloadOnly/v1.32.2/json-events (4.59s)

                                                
                                    
TestDownloadOnly/v1.32.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.2/preload-exists
I0407 12:03:54.093140  907461 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime docker
I0407 12:03:54.093185  907461 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20602-902080/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-docker-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.32.2/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.32.2/LogsDuration (0.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-276411
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-276411: exit status 85 (155.051372ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-931238 | jenkins | v1.35.0 | 07 Apr 25 12:03 UTC |                     |
	|         | -p download-only-931238        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.35.0 | 07 Apr 25 12:03 UTC | 07 Apr 25 12:03 UTC |
	| delete  | -p download-only-931238        | download-only-931238 | jenkins | v1.35.0 | 07 Apr 25 12:03 UTC | 07 Apr 25 12:03 UTC |
	| start   | -o=json --download-only        | download-only-276411 | jenkins | v1.35.0 | 07 Apr 25 12:03 UTC |                     |
	|         | -p download-only-276411        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/07 12:03:49
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0407 12:03:49.547344  907668 out.go:345] Setting OutFile to fd 1 ...
	I0407 12:03:49.547526  907668 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 12:03:49.547553  907668 out.go:358] Setting ErrFile to fd 2...
	I0407 12:03:49.547570  907668 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 12:03:49.547841  907668 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20602-902080/.minikube/bin
	I0407 12:03:49.548277  907668 out.go:352] Setting JSON to true
	I0407 12:03:49.549225  907668 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":13574,"bootTime":1744013856,"procs":146,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1081-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0407 12:03:49.549319  907668 start.go:139] virtualization:  
	I0407 12:03:49.552669  907668 out.go:97] [download-only-276411] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	I0407 12:03:49.552917  907668 notify.go:220] Checking for updates...
	I0407 12:03:49.555933  907668 out.go:169] MINIKUBE_LOCATION=20602
	I0407 12:03:49.559079  907668 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0407 12:03:49.562159  907668 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20602-902080/kubeconfig
	I0407 12:03:49.565058  907668 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20602-902080/.minikube
	I0407 12:03:49.567815  907668 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0407 12:03:49.573582  907668 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0407 12:03:49.573894  907668 driver.go:394] Setting default libvirt URI to qemu:///system
	I0407 12:03:49.609491  907668 docker.go:123] docker version: linux-28.0.4:Docker Engine - Community
	I0407 12:03:49.609622  907668 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0407 12:03:49.673933  907668 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:true NGoroutines:46 SystemTime:2025-04-07 12:03:49.664804315 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1081-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> Se
rverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0]] Warnings:<nil>}}
	I0407 12:03:49.674042  907668 docker.go:318] overlay module found
	I0407 12:03:49.677097  907668 out.go:97] Using the docker driver based on user configuration
	I0407 12:03:49.677145  907668 start.go:297] selected driver: docker
	I0407 12:03:49.677158  907668 start.go:901] validating driver "docker" against <nil>
	I0407 12:03:49.677278  907668 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0407 12:03:49.736339  907668 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:true NGoroutines:46 SystemTime:2025-04-07 12:03:49.727119391 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1081-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0]] Warnings:<nil>}}
	I0407 12:03:49.736610  907668 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0407 12:03:49.737195  907668 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0407 12:03:49.737392  907668 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0407 12:03:49.740528  907668 out.go:169] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-276411 host does not exist
	  To start a cluster, run: "minikube start -p download-only-276411"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.32.2/LogsDuration (0.16s)

TestDownloadOnly/v1.32.2/DeleteAll (0.37s)

=== RUN   TestDownloadOnly/v1.32.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.32.2/DeleteAll (0.37s)

TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds (0.22s)

=== RUN   TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-276411
--- PASS: TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds (0.22s)

TestBinaryMirror (0.61s)

=== RUN   TestBinaryMirror
I0407 12:03:55.989680  907461 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.2/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-663208 --alsologtostderr --binary-mirror http://127.0.0.1:42873 --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "binary-mirror-663208" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-663208
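
For reference, the mirror this test points --binary-mirror at is just a static HTTP tree; a minimal Go sketch of such a server (the port comes from the command above, the ./mirror directory layout is an assumption):

	package main

	import (
		"log"
		"net/http"
	)

	func main() {
		// Serve ./mirror as a static file tree on the address passed to
		// --binary-mirror. That minikube requests dl.k8s.io-style paths like
		// /v1.32.2/bin/linux/arm64/kubectl under it is an assumption here.
		http.Handle("/", http.FileServer(http.Dir("./mirror")))
		log.Fatal(http.ListenAndServe("127.0.0.1:42873", nil))
	}
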
--- PASS: TestBinaryMirror (0.61s)

TestOffline (89.38s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-arm64 start -p offline-docker-860245 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-arm64 start -p offline-docker-860245 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker: (1m26.840759782s)
helpers_test.go:175: Cleaning up "offline-docker-860245" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p offline-docker-860245
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p offline-docker-860245: (2.541575317s)
--- PASS: TestOffline (89.38s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-184883
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-184883: exit status 85 (80.345097ms)

-- stdout --
	* Profile "addons-184883" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-184883"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-184883
addons_test.go:950: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-184883: exit status 85 (76.599653ms)

-- stdout --
	* Profile "addons-184883" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-184883"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

TestAddons/Setup (225.89s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-arm64 start -p addons-184883 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-arm64 start -p addons-184883 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m45.887669746s)
--- PASS: TestAddons/Setup (225.89s)

TestAddons/serial/Volcano (42.2s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:823: volcano-controller stabilized in 62.506767ms
addons_test.go:815: volcano-admission stabilized in 62.873499ms
addons_test.go:807: volcano-scheduler stabilized in 63.241487ms
addons_test.go:829: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-75fdd99bcf-2wv7c" [bd21a2d7-86ce-43cd-86de-7bfed99cb77a] Running
addons_test.go:829: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.003645903s
addons_test.go:833: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-75d8f6b5c-kzzmc" [52cbcc40-c9ee-4db4-8c81-70922f23c8a7] Running
addons_test.go:833: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003442578s
addons_test.go:837: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-86bdc5c9c-ljjrf" [3fda96e6-fd94-427f-b667-f8b4a3b4cdb2] Running
addons_test.go:837: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 6.003553206s
addons_test.go:842: (dbg) Run:  kubectl --context addons-184883 delete -n volcano-system job volcano-admission-init
addons_test.go:848: (dbg) Run:  kubectl --context addons-184883 create -f testdata/vcjob.yaml
addons_test.go:856: (dbg) Run:  kubectl --context addons-184883 get vcjob -n my-volcano
addons_test.go:874: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [c949a350-0723-4f4b-8c3a-141fa05e4948] Pending
helpers_test.go:344: "test-job-nginx-0" [c949a350-0723-4f4b-8c3a-141fa05e4948] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [c949a350-0723-4f4b-8c3a-141fa05e4948] Running
addons_test.go:874: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 13.004336154s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-184883 addons disable volcano --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-184883 addons disable volcano --alsologtostderr -v=1: (11.335623584s)
--- PASS: TestAddons/serial/Volcano (42.20s)

TestAddons/serial/GCPAuth/Namespaces (0.2s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-184883 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-184883 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.20s)

TestAddons/serial/GCPAuth/FakeCredentials (8.99s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-184883 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-184883 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [798aebc5-06fe-4941-83a8-71d517f87def] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [798aebc5-06fe-4941-83a8-71d517f87def] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 8.003458224s
addons_test.go:633: (dbg) Run:  kubectl --context addons-184883 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-184883 describe sa gcp-auth-test
addons_test.go:659: (dbg) Run:  kubectl --context addons-184883 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:683: (dbg) Run:  kubectl --context addons-184883 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
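
The checks above boil down to reading env vars out of the pod; a minimal Go sketch of the same probe, shelling out to kubectl (context and pod name come from the test lines above; that the gcp-auth addon injects this variable is the behavior under test):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Read the env var the gcp-auth addon is expected to inject.
		out, err := exec.Command("kubectl", "--context", "addons-184883",
			"exec", "busybox", "--",
			"printenv", "GOOGLE_APPLICATION_CREDENTIALS").Output()
		if err != nil {
			fmt.Println("printenv failed:", err)
			return
		}
		fmt.Printf("GOOGLE_APPLICATION_CREDENTIALS=%s", out)
	}
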
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (8.99s)

TestAddons/parallel/Registry (16.42s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 6.16951ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6c88467877-vmqn2" [963a0b14-7bc9-40ca-aa59-06f94c37b079] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003879874s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-sb7pt" [421bf005-10b9-45a5-962b-4bda985f1714] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.003366588s
addons_test.go:331: (dbg) Run:  kubectl --context addons-184883 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-184883 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-184883 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.526373684s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-arm64 -p addons-184883 ip
2025/04/07 12:08:58 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-184883 addons disable registry --alsologtostderr -v=1
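
The DEBUG GET above probes the registry's HTTP endpoint from the host; a hedged Go sketch of the same check (the IP comes from the minikube ip step above; /v2/ is the standard Docker registry API root):

	package main

	import (
		"fmt"
		"net/http"
	)

	func main() {
		// A Docker registry answers GET /v2/ with 200 once it is serving.
		resp, err := http.Get("http://192.168.49.2:5000/v2/")
		if err != nil {
			fmt.Println("registry not reachable:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println("registry status:", resp.Status)
	}
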
--- PASS: TestAddons/parallel/Registry (16.42s)

TestAddons/parallel/Ingress (20.68s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-184883 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-184883 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-184883 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [b429a264-5ec4-4147-941f-3ed02cd6384b] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [b429a264-5ec4-4147-941f-3ed02cd6384b] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.003313912s
I0407 12:10:22.871823  907461 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-arm64 -p addons-184883 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context addons-184883 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-arm64 -p addons-184883 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-184883 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-184883 addons disable ingress-dns --alsologtostderr -v=1: (1.35006661s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-184883 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-184883 addons disable ingress --alsologtostderr -v=1: (7.761339723s)
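
The curl above relies on the Host header to select the ingress rule; the equivalent request in Go, as a sketch (host and IP taken from the test lines above):

	package main

	import (
		"fmt"
		"io"
		"net/http"
	)

	func main() {
		req, err := http.NewRequest("GET", "http://127.0.0.1/", nil)
		if err != nil {
			fmt.Println(err)
			return
		}
		// Route by virtual host, exactly what curl -H 'Host: ...' does.
		req.Host = "nginx.example.com"
		resp, err := http.DefaultClient.Do(req)
		if err != nil {
			fmt.Println(err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Println(resp.Status, len(body), "bytes")
	}
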
--- PASS: TestAddons/parallel/Ingress (20.68s)

TestAddons/parallel/InspektorGadget (11.87s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-j2rv8" [39615c81-fa40-47e8-bfc6-d97abf83b2e6] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003740509s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-184883 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-184883 addons disable inspektor-gadget --alsologtostderr -v=1: (5.867546877s)
--- PASS: TestAddons/parallel/InspektorGadget (11.87s)

TestAddons/parallel/MetricsServer (6.94s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 4.21442ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7fbb699795-w4xpm" [c2fb5b19-d680-48cf-9d02-f162f75558f3] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.003001187s
addons_test.go:402: (dbg) Run:  kubectl --context addons-184883 top pods -n kube-system
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-184883 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.94s)

TestAddons/parallel/CSI (50.87s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I0407 12:09:24.013355  907461 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0407 12:09:24.016951  907461 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0407 12:09:24.016983  907461 kapi.go:107] duration metric: took 7.803906ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 7.814983ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-184883 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-184883 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-184883 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-184883 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-184883 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-184883 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-184883 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-184883 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-184883 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-184883 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-184883 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-184883 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-184883 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-184883 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-184883 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-184883 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-184883 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [738b0785-b667-44f9-9297-a8db4f3247bf] Pending
helpers_test.go:344: "task-pv-pod" [738b0785-b667-44f9-9297-a8db4f3247bf] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [738b0785-b667-44f9-9297-a8db4f3247bf] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.003746299s
addons_test.go:511: (dbg) Run:  kubectl --context addons-184883 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-184883 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-184883 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-184883 delete pod task-pv-pod
addons_test.go:527: (dbg) Run:  kubectl --context addons-184883 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-184883 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-184883 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-184883 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-184883 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-184883 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-184883 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-184883 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-184883 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-184883 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-184883 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-184883 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-184883 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [9f9f26b4-9784-4fc9-9c12-24790eec8717] Pending
helpers_test.go:344: "task-pv-pod-restore" [9f9f26b4-9784-4fc9-9c12-24790eec8717] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [9f9f26b4-9784-4fc9-9c12-24790eec8717] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.00390878s
addons_test.go:553: (dbg) Run:  kubectl --context addons-184883 delete pod task-pv-pod-restore
addons_test.go:557: (dbg) Run:  kubectl --context addons-184883 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-184883 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-184883 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-184883 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-184883 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.916721671s)
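
The runs of jsonpath queries above are poll loops waiting for a claim to bind; a compact Go sketch of the same wait (context and PVC names from the test; the 6m deadline mirrors the stated timeout):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		deadline := time.Now().Add(6 * time.Minute)
		for time.Now().Before(deadline) {
			out, err := exec.Command("kubectl", "--context", "addons-184883",
				"get", "pvc", "hpvc", "-n", "default",
				"-o", "jsonpath={.status.phase}").Output()
			if err == nil && string(out) == "Bound" {
				fmt.Println("pvc hpvc is Bound")
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for pvc hpvc")
	}
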
--- PASS: TestAddons/parallel/CSI (50.87s)

TestAddons/parallel/Headlamp (17.76s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-184883 --alsologtostderr -v=1
addons_test.go:747: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-184883 --alsologtostderr -v=1: (1.054716945s)
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5d4b5d7bd6-wzghl" [f0b1906f-6d90-4c37-bede-3de366f73bff] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5d4b5d7bd6-wzghl" [f0b1906f-6d90-4c37-bede-3de366f73bff] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.003123305s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-184883 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-184883 addons disable headlamp --alsologtostderr -v=1: (5.704044525s)
--- PASS: TestAddons/parallel/Headlamp (17.76s)

TestAddons/parallel/CloudSpanner (5.58s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-7dc7f9b5b8-7scqw" [e61db9c5-7df6-4b70-bcd9-611123866ebd] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004151652s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-184883 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.58s)

TestAddons/parallel/LocalPath (53.94s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-184883 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-184883 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-184883 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-184883 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-184883 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-184883 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-184883 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-184883 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [05885aa6-e048-4d05-9817-49d1d802ba7c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [05885aa6-e048-4d05-9817-49d1d802ba7c] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [05885aa6-e048-4d05-9817-49d1d802ba7c] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.003777949s
addons_test.go:906: (dbg) Run:  kubectl --context addons-184883 get pvc test-pvc -o=json
addons_test.go:915: (dbg) Run:  out/minikube-linux-arm64 -p addons-184883 ssh "cat /opt/local-path-provisioner/pvc-a5b4138e-397b-4352-bb6d-43a52f8cce31_default_test-pvc/file1"
addons_test.go:927: (dbg) Run:  kubectl --context addons-184883 delete pod test-local-path
addons_test.go:931: (dbg) Run:  kubectl --context addons-184883 delete pvc test-pvc
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-184883 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-184883 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.49375359s)
--- PASS: TestAddons/parallel/LocalPath (53.94s)

TestAddons/parallel/NvidiaDevicePlugin (5.76s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-l9w79" [e358db02-d4cc-4b77-accf-888b15528cbc] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004852003s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-184883 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.76s)

TestAddons/parallel/Yakd (11.83s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-575dd5996b-gwz2c" [7164c168-c9d2-430d-a110-816f35349059] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003789785s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-184883 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-184883 addons disable yakd --alsologtostderr -v=1: (5.822260844s)
--- PASS: TestAddons/parallel/Yakd (11.83s)

TestAddons/StoppedEnableDisable (11.26s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-184883
addons_test.go:170: (dbg) Done: out/minikube-linux-arm64 stop -p addons-184883: (10.973606377s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-184883
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-184883
addons_test.go:183: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-184883
--- PASS: TestAddons/StoppedEnableDisable (11.26s)

TestCertOptions (36.6s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-736687 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker
E0407 12:52:42.616151  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/addons-184883/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-736687 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker: (33.736978434s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-736687 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-736687 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-736687 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-736687" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-736687
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-736687: (2.167400853s)
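
Instead of openssl, the SANs requested via --apiserver-ips/--apiserver-names can be inspected with Go's x509 parser; a sketch, assuming the cert has been copied off the node to ./apiserver.crt:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		data, err := os.ReadFile("apiserver.crt") // copied from /var/lib/minikube/certs/
		if err != nil {
			fmt.Println(err)
			return
		}
		block, _ := pem.Decode(data)
		if block == nil {
			fmt.Println("no PEM block found")
			return
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("DNS SANs:", cert.DNSNames)    // expect localhost, www.google.com
		fmt.Println("IP SANs: ", cert.IPAddresses) // expect 127.0.0.1, 192.168.15.15
	}
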
--- PASS: TestCertOptions (36.60s)

TestCertExpiration (240.73s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-550861 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-550861 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker: (35.297375788s)
E0407 12:52:19.986261  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/functional-020915/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-550861 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-550861 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker: (23.232745341s)
helpers_test.go:175: Cleaning up "cert-expiration-550861" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-550861
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-550861: (2.202271388s)
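
What this test exercises is the NotAfter date on the generated certificates; a minimal sketch for inspecting it (the path reuses the apiserver cert location from the cert-options check above; direct file access is an assumption):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver.crt")
		if err != nil {
			fmt.Println(err)
			return
		}
		block, _ := pem.Decode(data)
		if block == nil {
			fmt.Println("no PEM block found")
			return
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			fmt.Println(err)
			return
		}
		// --cert-expiration=3m should land NotAfter about 3 minutes out.
		fmt.Println("expires:", cert.NotAfter, "in", time.Until(cert.NotAfter))
	}
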
--- PASS: TestCertExpiration (240.73s)

TestDockerFlags (39.21s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-arm64 start -p docker-flags-367418 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:51: (dbg) Done: out/minikube-linux-arm64 start -p docker-flags-367418 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (36.34044102s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-arm64 -p docker-flags-367418 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-arm64 -p docker-flags-367418 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-367418" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-flags-367418
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-flags-367418: (2.254092185s)
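
A sketch of the verification step: read the docker unit's Environment property and look for the --docker-env values, mirroring the systemctl check above (must run on the node, e.g. via minikube ssh):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("systemctl", "show", "docker",
			"--property=Environment", "--no-pager").Output()
		if err != nil {
			fmt.Println("systemctl failed:", err)
			return
		}
		env := string(out) // e.g. "Environment=FOO=BAR BAZ=BAT\n"
		for _, want := range []string{"FOO=BAR", "BAZ=BAT"} {
			fmt.Printf("%s present: %v\n", want, strings.Contains(env, want))
		}
	}
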
--- PASS: TestDockerFlags (39.21s)

TestForceSystemdFlag (50.68s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-775898 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
E0407 12:50:21.458207  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/skaffold-259543/client.crt: no such file or directory" logger="UnhandledError"
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-775898 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (47.429935246s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-775898 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-775898" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-775898
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-775898: (2.773887615s)
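
The assertion behind the ssh step is a single docker query; a Go sketch (assumes docker is in PATH wherever it runs):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("docker", "info",
			"--format", "{{.CgroupDriver}}").Output()
		if err != nil {
			fmt.Println("docker info failed:", err)
			return
		}
		driver := strings.TrimSpace(string(out))
		// --force-systemd should flip this from cgroupfs to systemd.
		fmt.Println("cgroup driver:", driver, "systemd:", driver == "systemd")
	}
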
--- PASS: TestForceSystemdFlag (50.68s)

TestForceSystemdEnv (40.96s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-540178 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-540178 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (38.208848675s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-540178 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-540178" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-540178
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-540178: (2.270898717s)
--- PASS: TestForceSystemdEnv (40.96s)

TestErrorSpam/setup (32.06s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-296957 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-296957 --driver=docker  --container-runtime=docker
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-296957 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-296957 --driver=docker  --container-runtime=docker: (32.057721952s)
--- PASS: TestErrorSpam/setup (32.06s)

TestErrorSpam/start (0.8s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-296957 --log_dir /tmp/nospam-296957 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-296957 --log_dir /tmp/nospam-296957 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-296957 --log_dir /tmp/nospam-296957 start --dry-run
--- PASS: TestErrorSpam/start (0.80s)

TestErrorSpam/status (1.16s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-296957 --log_dir /tmp/nospam-296957 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-296957 --log_dir /tmp/nospam-296957 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-296957 --log_dir /tmp/nospam-296957 status
--- PASS: TestErrorSpam/status (1.16s)

TestErrorSpam/pause (1.4s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-296957 --log_dir /tmp/nospam-296957 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-296957 --log_dir /tmp/nospam-296957 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-296957 --log_dir /tmp/nospam-296957 pause
--- PASS: TestErrorSpam/pause (1.40s)

TestErrorSpam/unpause (1.43s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-296957 --log_dir /tmp/nospam-296957 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-296957 --log_dir /tmp/nospam-296957 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-296957 --log_dir /tmp/nospam-296957 unpause
--- PASS: TestErrorSpam/unpause (1.43s)

TestErrorSpam/stop (2.08s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-296957 --log_dir /tmp/nospam-296957 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-296957 --log_dir /tmp/nospam-296957 stop: (1.870989664s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-296957 --log_dir /tmp/nospam-296957 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-296957 --log_dir /tmp/nospam-296957 stop
--- PASS: TestErrorSpam/stop (2.08s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1872: local sync path: /home/jenkins/minikube-integration/20602-902080/.minikube/files/etc/test/nested/copy/907461/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (73.52s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2251: (dbg) Run:  out/minikube-linux-arm64 start -p functional-020915 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker
functional_test.go:2251: (dbg) Done: out/minikube-linux-arm64 start -p functional-020915 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker: (1m13.52329874s)
--- PASS: TestFunctional/serial/StartWithProxy (73.52s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (35.04s)

=== RUN   TestFunctional/serial/SoftStart
I0407 12:12:41.390342  907461 config.go:182] Loaded profile config "functional-020915": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
functional_test.go:676: (dbg) Run:  out/minikube-linux-arm64 start -p functional-020915 --alsologtostderr -v=8
E0407 12:12:42.618479  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/addons-184883/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:12:42.625542  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/addons-184883/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:12:42.636952  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/addons-184883/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:12:42.658305  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/addons-184883/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:12:42.700461  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/addons-184883/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:12:42.781885  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/addons-184883/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:12:42.943280  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/addons-184883/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:12:43.265111  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/addons-184883/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:12:43.906659  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/addons-184883/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:12:45.187998  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/addons-184883/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:12:47.749863  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/addons-184883/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:12:52.871319  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/addons-184883/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:13:03.113243  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/addons-184883/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:676: (dbg) Done: out/minikube-linux-arm64 start -p functional-020915 --alsologtostderr -v=8: (35.036321667s)
functional_test.go:680: soft start took 35.039024867s for "functional-020915" cluster.
I0407 12:13:16.426980  907461 config.go:182] Loaded profile config "functional-020915": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
--- PASS: TestFunctional/serial/SoftStart (35.04s)
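
Note: the burst of cert_rotation "Unhandled Error" lines above appears to be client-go's certificate watcher still holding a reference to the client.crt of the addons-184883 profile, which was torn down earlier in this run; the file is gone, so every rotation attempt logs an error. It is stale-kubeconfig noise and does not affect this test's result.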

TestFunctional/serial/KubeContext (0.07s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:698: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.07s)

TestFunctional/serial/KubectlGetPods (0.11s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:713: (dbg) Run:  kubectl --context functional-020915 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.11s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.26s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1066: (dbg) Run:  out/minikube-linux-arm64 -p functional-020915 cache add registry.k8s.io/pause:3.1
functional_test.go:1066: (dbg) Done: out/minikube-linux-arm64 -p functional-020915 cache add registry.k8s.io/pause:3.1: (1.208880868s)
functional_test.go:1066: (dbg) Run:  out/minikube-linux-arm64 -p functional-020915 cache add registry.k8s.io/pause:3.3
functional_test.go:1066: (dbg) Done: out/minikube-linux-arm64 -p functional-020915 cache add registry.k8s.io/pause:3.3: (1.147748699s)
functional_test.go:1066: (dbg) Run:  out/minikube-linux-arm64 -p functional-020915 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.26s)

TestFunctional/serial/CacheCmd/cache/add_local (0.97s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1094: (dbg) Run:  docker build -t minikube-local-cache-test:functional-020915 /tmp/TestFunctionalserialCacheCmdcacheadd_local2883172599/001
functional_test.go:1106: (dbg) Run:  out/minikube-linux-arm64 -p functional-020915 cache add minikube-local-cache-test:functional-020915
functional_test.go:1111: (dbg) Run:  out/minikube-linux-arm64 -p functional-020915 cache delete minikube-local-cache-test:functional-020915
functional_test.go:1100: (dbg) Run:  docker rmi minikube-local-cache-test:functional-020915
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (0.97s)
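
The add_local flow is reproducible by hand. A minimal sketch, assuming a local Docker daemon, a Dockerfile context in the current directory, and the profile name from this run (image name and tag are arbitrary):

	# build a throwaway image on the host
	docker build -t minikube-local-cache-test:functional-020915 .
	# copy it into minikube's cache and load it into the node
	minikube -p functional-020915 cache add minikube-local-cache-test:functional-020915
	# drop it from the cache again, then remove the host-side tag
	minikube -p functional-020915 cache delete minikube-local-cache-test:functional-020915
	docker rmi minikube-local-cache-test:functional-020915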

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1119: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1127: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1141: (dbg) Run:  out/minikube-linux-arm64 -p functional-020915 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.64s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1164: (dbg) Run:  out/minikube-linux-arm64 -p functional-020915 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1170: (dbg) Run:  out/minikube-linux-arm64 -p functional-020915 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1170: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-020915 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (291.855484ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1175: (dbg) Run:  out/minikube-linux-arm64 -p functional-020915 cache reload
functional_test.go:1180: (dbg) Run:  out/minikube-linux-arm64 -p functional-020915 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.64s)
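
What cache_reload exercises, as a hand-runnable sketch (assuming a docker-runtime profile named functional-020915 with pause:latest already in the cache):

	# delete the image inside the node; crictl then reports it missing (exit 1)
	minikube -p functional-020915 ssh sudo docker rmi registry.k8s.io/pause:latest
	minikube -p functional-020915 ssh sudo crictl inspecti registry.k8s.io/pause:latest
	# reload pushes cached images back into the node, after which inspecti succeeds
	minikube -p functional-020915 cache reload
	minikube -p functional-020915 ssh sudo crictl inspecti registry.k8s.io/pause:latest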

TestFunctional/serial/CacheCmd/cache/delete (0.13s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1189: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1189: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

TestFunctional/serial/MinikubeKubectlCmd (0.14s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:733: (dbg) Run:  out/minikube-linux-arm64 -p functional-020915 kubectl -- --context functional-020915 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.16s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:758: (dbg) Run:  out/kubectl --context functional-020915 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.16s)

TestFunctional/serial/ExtraConfig (44.39s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:774: (dbg) Run:  out/minikube-linux-arm64 start -p functional-020915 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0407 12:13:23.594890  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/addons-184883/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:14:04.556238  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/addons-184883/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:774: (dbg) Done: out/minikube-linux-arm64 start -p functional-020915 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (44.387493894s)
functional_test.go:778: restart took 44.387597829s for "functional-020915" cluster.
I0407 12:14:07.741755  907461 config.go:182] Loaded profile config "functional-020915": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
--- PASS: TestFunctional/serial/ExtraConfig (44.39s)
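
--extra-config takes component.key=value pairs that are forwarded to the named component's flags, and --wait=all blocks until every verified component is healthy. The invocation under test, as a sketch:

	minikube start -p functional-020915 \
	  --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision \
	  --wait=all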

TestFunctional/serial/ComponentHealth (0.11s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:827: (dbg) Run:  kubectl --context functional-020915 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:842: etcd phase: Running
functional_test.go:852: etcd status: Ready
functional_test.go:842: kube-apiserver phase: Running
functional_test.go:852: kube-apiserver status: Ready
functional_test.go:842: kube-controller-manager phase: Running
functional_test.go:852: kube-controller-manager status: Ready
functional_test.go:842: kube-scheduler phase: Running
functional_test.go:852: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.11s)

TestFunctional/serial/LogsCmd (1.29s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1253: (dbg) Run:  out/minikube-linux-arm64 -p functional-020915 logs
functional_test.go:1253: (dbg) Done: out/minikube-linux-arm64 -p functional-020915 logs: (1.287688419s)
--- PASS: TestFunctional/serial/LogsCmd (1.29s)

TestFunctional/serial/LogsFileCmd (1.29s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1267: (dbg) Run:  out/minikube-linux-arm64 -p functional-020915 logs --file /tmp/TestFunctionalserialLogsFileCmd1521764871/001/logs.txt
functional_test.go:1267: (dbg) Done: out/minikube-linux-arm64 -p functional-020915 logs --file /tmp/TestFunctionalserialLogsFileCmd1521764871/001/logs.txt: (1.287402671s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.29s)

TestFunctional/serial/InvalidService (4.62s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2338: (dbg) Run:  kubectl --context functional-020915 apply -f testdata/invalidsvc.yaml
functional_test.go:2352: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-020915
functional_test.go:2352: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-020915: exit status 115 (587.242374ms)
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30340 |
	|-----------|-------------|-------------|---------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2344: (dbg) Run:  kubectl --context functional-020915 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.62s)
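
Exit status 115 is minikube's SVC_UNREACHABLE class: the NodePort exists, so the URL table still prints, but no running pod backs the service. A sketch of the reproduction, assuming testdata/invalidsvc.yaml defines a service whose selector matches no startable pod:

	kubectl --context functional-020915 apply -f testdata/invalidsvc.yaml
	minikube service invalid-svc -p functional-020915    # exit 115: no running pod for service
	kubectl --context functional-020915 delete -f testdata/invalidsvc.yaml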

TestFunctional/parallel/ConfigCmd (0.5s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1216: (dbg) Run:  out/minikube-linux-arm64 -p functional-020915 config unset cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-arm64 -p functional-020915 config get cpus
functional_test.go:1216: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-020915 config get cpus: exit status 14 (89.502022ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1216: (dbg) Run:  out/minikube-linux-arm64 -p functional-020915 config set cpus 2
functional_test.go:1216: (dbg) Run:  out/minikube-linux-arm64 -p functional-020915 config get cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-arm64 -p functional-020915 config unset cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-arm64 -p functional-020915 config get cpus
functional_test.go:1216: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-020915 config get cpus: exit status 14 (72.479329ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.50s)
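
The config round-trip above asserts that "config get" on an unset key exits 14. A minimal sketch (cpus is the key the test uses; any config key behaves the same):

	minikube -p functional-020915 config unset cpus
	minikube -p functional-020915 config get cpus    # exit 14: key not found
	minikube -p functional-020915 config set cpus 2
	minikube -p functional-020915 config get cpus    # prints 2
	minikube -p functional-020915 config unset cpus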

TestFunctional/parallel/DashboardCmd (12.86s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:922: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-020915 --alsologtostderr -v=1]
functional_test.go:927: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-020915 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 950289: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (12.86s)

TestFunctional/parallel/DryRun (0.47s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:991: (dbg) Run:  out/minikube-linux-arm64 start -p functional-020915 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:991: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-020915 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (214.776468ms)
-- stdout --
	* [functional-020915] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20602
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20602-902080/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20602-902080/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0407 12:14:48.184225  950001 out.go:345] Setting OutFile to fd 1 ...
	I0407 12:14:48.184407  950001 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 12:14:48.184429  950001 out.go:358] Setting ErrFile to fd 2...
	I0407 12:14:48.184461  950001 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 12:14:48.184908  950001 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20602-902080/.minikube/bin
	I0407 12:14:48.185394  950001 out.go:352] Setting JSON to false
	I0407 12:14:48.186469  950001 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":14233,"bootTime":1744013856,"procs":217,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1081-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0407 12:14:48.186579  950001 start.go:139] virtualization:  
	I0407 12:14:48.189955  950001 out.go:177] * [functional-020915] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	I0407 12:14:48.193909  950001 notify.go:220] Checking for updates...
	I0407 12:14:48.194779  950001 out.go:177]   - MINIKUBE_LOCATION=20602
	I0407 12:14:48.198460  950001 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0407 12:14:48.201384  950001 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20602-902080/kubeconfig
	I0407 12:14:48.204366  950001 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20602-902080/.minikube
	I0407 12:14:48.207891  950001 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0407 12:14:48.210919  950001 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0407 12:14:48.214317  950001 config.go:182] Loaded profile config "functional-020915": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0407 12:14:48.214932  950001 driver.go:394] Setting default libvirt URI to qemu:///system
	I0407 12:14:48.242996  950001 docker.go:123] docker version: linux-28.0.4:Docker Engine - Community
	I0407 12:14:48.243117  950001 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0407 12:14:48.309839  950001 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-04-07 12:14:48.298790301 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1081-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0]] Warnings:<nil>}}
	I0407 12:14:48.309953  950001 docker.go:318] overlay module found
	I0407 12:14:48.312978  950001 out.go:177] * Using the docker driver based on existing profile
	I0407 12:14:48.315889  950001 start.go:297] selected driver: docker
	I0407 12:14:48.315911  950001 start.go:901] validating driver "docker" against &{Name:functional-020915 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:functional-020915 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 12:14:48.316012  950001 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0407 12:14:48.319969  950001 out.go:201] 
	W0407 12:14:48.322948  950001 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0407 12:14:48.325933  950001 out.go:201] 
** /stderr **
functional_test.go:1008: (dbg) Run:  out/minikube-linux-arm64 start -p functional-020915 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
--- PASS: TestFunctional/parallel/DryRun (0.47s)
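
--dry-run walks the full validation path without touching the cluster, so the undersized memory request fails up front with RSRC_INSUFFICIENT_REQ_MEMORY (exit 23), while the same command without --memory validates cleanly. As a sketch:

	minikube start -p functional-020915 --dry-run --memory 250MB --driver=docker --container-runtime=docker   # exit 23
	minikube start -p functional-020915 --dry-run --driver=docker --container-runtime=docker                  # validates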

TestFunctional/parallel/InternationalLanguage (0.26s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1037: (dbg) Run:  out/minikube-linux-arm64 start -p functional-020915 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-020915 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (255.966821ms)
-- stdout --
	* [functional-020915] minikube v1.35.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20602
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20602-902080/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20602-902080/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0407 12:14:47.921841  949887 out.go:345] Setting OutFile to fd 1 ...
	I0407 12:14:47.922058  949887 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 12:14:47.922082  949887 out.go:358] Setting ErrFile to fd 2...
	I0407 12:14:47.922100  949887 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 12:14:47.923279  949887 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20602-902080/.minikube/bin
	I0407 12:14:47.923999  949887 out.go:352] Setting JSON to false
	I0407 12:14:47.926003  949887 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":14232,"bootTime":1744013856,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1081-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0407 12:14:47.926119  949887 start.go:139] virtualization:  
	I0407 12:14:47.931489  949887 out.go:177] * [functional-020915] minikube v1.35.0 sur Ubuntu 20.04 (arm64)
	I0407 12:14:47.935194  949887 out.go:177]   - MINIKUBE_LOCATION=20602
	I0407 12:14:47.936475  949887 notify.go:220] Checking for updates...
	I0407 12:14:47.942030  949887 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0407 12:14:47.945007  949887 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20602-902080/kubeconfig
	I0407 12:14:47.947857  949887 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20602-902080/.minikube
	I0407 12:14:47.950805  949887 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0407 12:14:47.953790  949887 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0407 12:14:47.957275  949887 config.go:182] Loaded profile config "functional-020915": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0407 12:14:47.957813  949887 driver.go:394] Setting default libvirt URI to qemu:///system
	I0407 12:14:47.999425  949887 docker.go:123] docker version: linux-28.0.4:Docker Engine - Community
	I0407 12:14:47.999548  949887 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0407 12:14:48.092619  949887 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-04-07 12:14:48.082215275 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1081-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0]] Warnings:<nil>}}
	I0407 12:14:48.092723  949887 docker.go:318] overlay module found
	I0407 12:14:48.096502  949887 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0407 12:14:48.099605  949887 start.go:297] selected driver: docker
	I0407 12:14:48.099632  949887 start.go:901] validating driver "docker" against &{Name:functional-020915 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:functional-020915 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 12:14:48.099727  949887 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0407 12:14:48.103366  949887 out.go:201] 
	W0407 12:14:48.106896  949887 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0407 12:14:48.110657  949887 out.go:201] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.26s)

TestFunctional/parallel/StatusCmd (1.36s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:871: (dbg) Run:  out/minikube-linux-arm64 -p functional-020915 status
functional_test.go:877: (dbg) Run:  out/minikube-linux-arm64 -p functional-020915 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:889: (dbg) Run:  out/minikube-linux-arm64 -p functional-020915 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.36s)

TestFunctional/parallel/ServiceCmdConnect (10.63s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1644: (dbg) Run:  kubectl --context functional-020915 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1652: (dbg) Run:  kubectl --context functional-020915 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1657: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-8449669db6-x2c49" [237a0c39-d621-4da1-9401-5a123dabc481] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-8449669db6-x2c49" [237a0c39-d621-4da1-9401-5a123dabc481] Running
functional_test.go:1657: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.00325815s
functional_test.go:1666: (dbg) Run:  out/minikube-linux-arm64 -p functional-020915 service hello-node-connect --url
functional_test.go:1672: found endpoint for hello-node-connect: http://192.168.49.2:32357
functional_test.go:1692: http://192.168.49.2:32357: success! body:

Hostname: hello-node-connect-8449669db6-x2c49

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:32357
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.63s)
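
The connect flow above, as a hand-runnable sketch (names and image as in the test; the NodePort minikube reports differs per run):

	kubectl --context functional-020915 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
	kubectl --context functional-020915 expose deployment hello-node-connect --type=NodePort --port=8080
	# once the pod is Running, ask minikube for a reachable URL and probe it
	curl "$(minikube -p functional-020915 service hello-node-connect --url)"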

TestFunctional/parallel/AddonsCmd (0.19s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-020915 addons list
functional_test.go:1719: (dbg) Run:  out/minikube-linux-arm64 -p functional-020915 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.19s)

TestFunctional/parallel/PersistentVolumeClaim (26.16s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [b9a3a260-d1a0-4294-ab19-370cd0a729ad] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003506248s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-020915 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-020915 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-020915 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-020915 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [abc9c12c-54a0-44b6-ad30-4239798172fa] Pending
helpers_test.go:344: "sp-pod" [abc9c12c-54a0-44b6-ad30-4239798172fa] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [abc9c12c-54a0-44b6-ad30-4239798172fa] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.004117395s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-020915 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-020915 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-020915 delete -f testdata/storage-provisioner/pod.yaml: (1.116009905s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-020915 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [6f93fbda-c2c3-4b55-8805-d901c55bb82f] Pending
helpers_test.go:344: "sp-pod" [6f93fbda-c2c3-4b55-8805-d901c55bb82f] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [6f93fbda-c2c3-4b55-8805-d901c55bb82f] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.003008865s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-020915 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (26.16s)
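
The persistence check is: write through one pod, delete it, and read the same claim from a replacement pod. A sketch using the test's manifests (testdata paths assumed from the minikube repo):

	kubectl --context functional-020915 apply -f testdata/storage-provisioner/pvc.yaml
	kubectl --context functional-020915 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-020915 exec sp-pod -- touch /tmp/mount/foo
	kubectl --context functional-020915 delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-020915 apply -f testdata/storage-provisioner/pod.yaml
	# the new pod mounts the same PVC, so foo must still be there
	kubectl --context functional-020915 exec sp-pod -- ls /tmp/mount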

TestFunctional/parallel/SSHCmd (0.72s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1742: (dbg) Run:  out/minikube-linux-arm64 -p functional-020915 ssh "echo hello"
functional_test.go:1759: (dbg) Run:  out/minikube-linux-arm64 -p functional-020915 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.72s)

TestFunctional/parallel/CpCmd (2.43s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-020915 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-020915 ssh -n functional-020915 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-020915 cp functional-020915:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd327847982/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-020915 ssh -n functional-020915 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-020915 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-020915 ssh -n functional-020915 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.43s)
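
minikube cp copies in either direction; a node-side path is written as profile:path, and missing destination directories are created. The three copies above, as a sketch:

	minikube -p functional-020915 cp testdata/cp-test.txt /home/docker/cp-test.txt                 # host -> node
	minikube -p functional-020915 cp functional-020915:/home/docker/cp-test.txt /tmp/cp-test.txt   # node -> host
	minikube -p functional-020915 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt          # creates the directory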

TestFunctional/parallel/FileSync (0.38s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1946: Checking for existence of /etc/test/nested/copy/907461/hosts within VM
functional_test.go:1948: (dbg) Run:  out/minikube-linux-arm64 -p functional-020915 ssh "sudo cat /etc/test/nested/copy/907461/hosts"
functional_test.go:1953: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.38s)
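
FileSync relies on minikube pushing everything under $MINIKUBE_HOME/files into the node at the same relative path during start; the /etc/test/nested/copy/907461/hosts path above was staged that way by the harness (907461 is the test binary's pid). A sketch, assuming the default MINIKUBE_HOME of ~/.minikube:

	mkdir -p ~/.minikube/files/etc/test
	echo "hello from the host" > ~/.minikube/files/etc/test/hello
	minikube start -p functional-020915    # file sync happens during start
	minikube -p functional-020915 ssh "cat /etc/test/hello"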

TestFunctional/parallel/CertSync (2.32s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1989: Checking for existence of /etc/ssl/certs/907461.pem within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-arm64 -p functional-020915 ssh "sudo cat /etc/ssl/certs/907461.pem"
functional_test.go:1989: Checking for existence of /usr/share/ca-certificates/907461.pem within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-arm64 -p functional-020915 ssh "sudo cat /usr/share/ca-certificates/907461.pem"
functional_test.go:1989: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-arm64 -p functional-020915 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2016: Checking for existence of /etc/ssl/certs/9074612.pem within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-arm64 -p functional-020915 ssh "sudo cat /etc/ssl/certs/9074612.pem"
functional_test.go:2016: Checking for existence of /usr/share/ca-certificates/9074612.pem within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-arm64 -p functional-020915 ssh "sudo cat /usr/share/ca-certificates/9074612.pem"
functional_test.go:2016: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-arm64 -p functional-020915 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.32s)
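
CertSync is the same mechanism for $MINIKUBE_HOME/certs: each certificate lands in /etc/ssl/certs and /usr/share/ca-certificates inside the node, plus a subject-hash alias (the 51391683.0-style names). A sketch for checking the hash of a synced cert (the 907461.pem filename and certs location are assumptions based on this run):

	# the .0 alias is the OpenSSL subject hash of the certificate
	openssl x509 -noout -hash -in ~/.minikube/certs/907461.pem
	minikube -p functional-020915 ssh "sudo cat /etc/ssl/certs/907461.pem"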

TestFunctional/parallel/NodeLabels (0.14s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:236: (dbg) Run:  kubectl --context functional-020915 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.14s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.3s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2044: (dbg) Run:  out/minikube-linux-arm64 -p functional-020915 ssh "sudo systemctl is-active crio"
functional_test.go:2044: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-020915 ssh "sudo systemctl is-active crio": exit status 1 (299.410818ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.30s)
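
The non-zero exit here is the point: systemctl is-active exits 3 for an inactive unit, ssh propagates that status, and the test passes because stdout reads "inactive", meaning crio is disabled while docker is the active runtime. One-liner:

	minikube -p functional-020915 ssh "sudo systemctl is-active crio"    # prints inactive, exit 3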

TestFunctional/parallel/License (0.27s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2305: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.27s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.64s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-020915 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-020915 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-020915 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 947318: os: process already finished
helpers_test.go:502: unable to terminate pid 947133: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-020915 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.64s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-020915 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.46s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-020915 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [632b1946-bd7b-4052-bec8-446b4afa3688] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [632b1946-bd7b-4052-bec8-446b4afa3688] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.003720417s
I0407 12:14:26.378242  907461 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.46s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.1s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-020915 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.10s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.106.141.69 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-020915 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
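
Taken together, the tunnel subtests cover the normal workflow: keep minikube tunnel running in the background, create a LoadBalancer service, wait for an ingress IP, then hit it directly from the host. A sketch (testsvc.yaml and the nginx-svc name as in the test; the 10.106.141.69 address is from this run and will differ):

	minikube -p functional-020915 tunnel &    # stays up, routing the service network to the host
	kubectl --context functional-020915 apply -f testdata/testsvc.yaml
	# wait for the LoadBalancer to get an ingress IP, then probe it
	kubectl --context functional-020915 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
	curl http://10.106.141.69/
	kill %1    # tear the tunnel down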

TestFunctional/parallel/ServiceCmd/DeployApp (6.27s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1454: (dbg) Run:  kubectl --context functional-020915 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1462: (dbg) Run:  kubectl --context functional-020915 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1467: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64fc58db8c-sngrk" [52224d54-b0f7-423c-8d00-5bff763b1bc6] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64fc58db8c-sngrk" [52224d54-b0f7-423c-8d00-5bff763b1bc6] Running
functional_test.go:1467: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.004211041s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.27s)
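
DeployApp is plain kubectl against the cluster context; the ServiceCmd sub-tests that follow only query the NodePort service it creates. The same flow by hand (the wait step is an added convenience, not part of the test):

# Create the deployment and expose it as a NodePort service.
kubectl --context functional-020915 create deployment hello-node \
  --image=registry.k8s.io/echoserver-arm:1.8
kubectl --context functional-020915 expose deployment hello-node \
  --type=NodePort --port=8080

# Wait for the pod, then resolve the URL (port 32733 in this run).
kubectl --context functional-020915 wait pod -l app=hello-node \
  --for=condition=Ready --timeout=600s
out/minikube-linux-arm64 -p functional-020915 service hello-node --url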

TestFunctional/parallel/ProfileCmd/profile_not_create (0.55s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1287: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1292: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.55s)

TestFunctional/parallel/ServiceCmd/List (0.64s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1476: (dbg) Run:  out/minikube-linux-arm64 -p functional-020915 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.64s)

TestFunctional/parallel/ProfileCmd/profile_list (0.53s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1327: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1332: Took "457.169454ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1341: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1346: Took "76.789306ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.53s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.59s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1506: (dbg) Run:  out/minikube-linux-arm64 -p functional-020915 service list -o json
functional_test.go:1511: Took "587.790656ms" to run "out/minikube-linux-arm64 -p functional-020915 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.59s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.53s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1378: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1383: Took "438.598815ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1391: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1396: Took "92.653266ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.53s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.58s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1526: (dbg) Run:  out/minikube-linux-arm64 -p functional-020915 service --namespace=default --https --url hello-node
functional_test.go:1539: found endpoint: https://192.168.49.2:32733
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.58s)

TestFunctional/parallel/MountCmd/any-port (8.64s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-020915 /tmp/TestFunctionalparallelMountCmdany-port1502394164/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1744028085264914925" to /tmp/TestFunctionalparallelMountCmdany-port1502394164/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1744028085264914925" to /tmp/TestFunctionalparallelMountCmdany-port1502394164/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1744028085264914925" to /tmp/TestFunctionalparallelMountCmdany-port1502394164/001/test-1744028085264914925
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-020915 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-020915 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (617.628736ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0407 12:14:45.884849  907461 retry.go:31] will retry after 325.334753ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-020915 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-020915 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Apr  7 12:14 created-by-test
-rw-r--r-- 1 docker docker 24 Apr  7 12:14 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Apr  7 12:14 test-1744028085264914925
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-020915 ssh cat /mount-9p/test-1744028085264914925
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-020915 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [b48fa579-6a27-4112-bdfd-661d839cd156] Pending
helpers_test.go:344: "busybox-mount" [b48fa579-6a27-4112-bdfd-661d839cd156] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [b48fa579-6a27-4112-bdfd-661d839cd156] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [b48fa579-6a27-4112-bdfd-661d839cd156] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003295616s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-020915 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-020915 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-020915 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-020915 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-020915 /tmp/TestFunctionalparallelMountCmdany-port1502394164/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.64s)
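
The any-port variant lets minikube pick the 9p port; the single failed findmnt probe above is just the test polling before the mount finished coming up, and the retry succeeds. A manual sketch of the same flow (the /tmp/demo-mount host path is hypothetical):

# Export a host directory into the guest over 9p; runs until killed.
out/minikube-linux-arm64 mount -p functional-020915 /tmp/demo-mount:/mount-9p &

# Verify the guest sees a 9p filesystem at the mount point.
out/minikube-linux-arm64 -p functional-020915 ssh "findmnt -T /mount-9p | grep 9p"

# Files written on the host are visible in the guest.
echo hello > /tmp/demo-mount/created-by-host
out/minikube-linux-arm64 -p functional-020915 ssh "cat /mount-9p/created-by-host"

# Clean up: force-unmount in the guest, then stop the mount process.
out/minikube-linux-arm64 -p functional-020915 ssh "sudo umount -f /mount-9p"
kill %1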

TestFunctional/parallel/ServiceCmd/Format (0.59s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1557: (dbg) Run:  out/minikube-linux-arm64 -p functional-020915 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.59s)

TestFunctional/parallel/ServiceCmd/URL (0.42s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1576: (dbg) Run:  out/minikube-linux-arm64 -p functional-020915 service hello-node --url
functional_test.go:1582: found endpoint for hello-node: http://192.168.49.2:32733
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.42s)

TestFunctional/parallel/MountCmd/specific-port (1.55s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-020915 /tmp/TestFunctionalparallelMountCmdspecific-port2844934200/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-020915 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-020915 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-020915 /tmp/TestFunctionalparallelMountCmdspecific-port2844934200/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-020915 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-020915 ssh "sudo umount -f /mount-9p": exit status 1 (359.923441ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-020915 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-020915 /tmp/TestFunctionalparallelMountCmdspecific-port2844934200/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.55s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.43s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-020915 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4203544635/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-020915 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4203544635/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-020915 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4203544635/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-020915 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-020915 ssh "findmnt -T" /mount1: exit status 1 (959.961476ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0407 12:14:56.418345  907461 retry.go:31] will retry after 371.84949ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-020915 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-020915 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-020915 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-020915 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-020915 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4203544635/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-020915 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4203544635/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-020915 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4203544635/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.43s)
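
VerifyCleanup does not stop each mount daemon individually; `mount --kill=true` terminates every mount process for the profile in one shot, which is why the subsequent stop attempts log "unable to find parent, assuming dead". Sketch (the /tmp/src host path is hypothetical):

# Start several mounts against the same profile...
out/minikube-linux-arm64 mount -p functional-020915 /tmp/src:/mount1 &
out/minikube-linux-arm64 mount -p functional-020915 /tmp/src:/mount2 &
out/minikube-linux-arm64 mount -p functional-020915 /tmp/src:/mount3 &

# ...then kill all of them at once.
out/minikube-linux-arm64 mount -p functional-020915 --kill=true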

TestFunctional/parallel/Version/short (0.12s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2273: (dbg) Run:  out/minikube-linux-arm64 -p functional-020915 version --short
--- PASS: TestFunctional/parallel/Version/short (0.12s)

TestFunctional/parallel/Version/components (1.37s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2287: (dbg) Run:  out/minikube-linux-arm64 -p functional-020915 version -o=json --components
functional_test.go:2287: (dbg) Done: out/minikube-linux-arm64 -p functional-020915 version -o=json --components: (1.370024725s)
--- PASS: TestFunctional/parallel/Version/components (1.37s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:278: (dbg) Run:  out/minikube-linux-arm64 -p functional-020915 image ls --format short --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-arm64 -p functional-020915 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.32.2
registry.k8s.io/kube-proxy:v1.32.2
registry.k8s.io/kube-controller-manager:v1.32.2
registry.k8s.io/kube-apiserver:v1.32.2
registry.k8s.io/etcd:3.5.16-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-020915
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
docker.io/kicbase/echo-server:functional-020915
functional_test.go:286: (dbg) Stderr: out/minikube-linux-arm64 -p functional-020915 image ls --format short --alsologtostderr:
I0407 12:15:06.100445  953142 out.go:345] Setting OutFile to fd 1 ...
I0407 12:15:06.100579  953142 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0407 12:15:06.100590  953142 out.go:358] Setting ErrFile to fd 2...
I0407 12:15:06.100595  953142 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0407 12:15:06.100914  953142 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20602-902080/.minikube/bin
I0407 12:15:06.101990  953142 config.go:182] Loaded profile config "functional-020915": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
I0407 12:15:06.102126  953142 config.go:182] Loaded profile config "functional-020915": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
I0407 12:15:06.104403  953142 cli_runner.go:164] Run: docker container inspect functional-020915 --format={{.State.Status}}
I0407 12:15:06.123266  953142 ssh_runner.go:195] Run: systemctl --version
I0407 12:15:06.123333  953142 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-020915
I0407 12:15:06.141914  953142 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33891 SSHKeyPath:/home/jenkins/minikube-integration/20602-902080/.minikube/machines/functional-020915/id_rsa Username:docker}
I0407 12:15:06.233619  953142 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)
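
ImageListShort and the three format variants that follow run the same inventory command with different renderers; the stderr trace shows each one ultimately shells into the node and runs `docker images --no-trunc`. For reference:

# Same image inventory, four output formats.
out/minikube-linux-arm64 -p functional-020915 image ls --format short
out/minikube-linux-arm64 -p functional-020915 image ls --format table
out/minikube-linux-arm64 -p functional-020915 image ls --format json
out/minikube-linux-arm64 -p functional-020915 image ls --format yaml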

TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:278: (dbg) Run:  out/minikube-linux-arm64 -p functional-020915 image ls --format table --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-arm64 -p functional-020915 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
| registry.k8s.io/kube-scheduler              | v1.32.2           | 82dfa03f692fb | 67.9MB |
| docker.io/library/nginx                     | latest            | 2c9168b3c9a84 | 197MB  |
| registry.k8s.io/etcd                        | 3.5.16-0          | 7fc9d4aa817aa | 142MB  |
| registry.k8s.io/pause                       | 3.10              | afb61768ce381 | 514kB  |
| docker.io/kubernetesui/dashboard            | <none>            | 20b332c9a70d8 | 244MB  |
| docker.io/kicbase/echo-server               | functional-020915 | ce2d2cda2d858 | 4.78MB |
| docker.io/kubernetesui/metrics-scraper      | <none>            | a422e0e982356 | 42.3MB |
| registry.k8s.io/kube-controller-manager     | v1.32.2           | 3c9285acfd2ff | 87.2MB |
| registry.k8s.io/kube-proxy                  | v1.32.2           | e5aac5df76d9b | 97.1MB |
| docker.io/library/nginx                     | alpine            | cedb667e1a7b4 | 49.4MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
| docker.io/library/minikube-local-cache-test | functional-020915 | 0b1d353a6ddff | 30B    |
| registry.k8s.io/kube-apiserver              | v1.32.2           | 6417e1437b6d9 | 93.9MB |
| registry.k8s.io/coredns/coredns             | v1.11.3           | 2f6c962e7b831 | 60.2MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:286: (dbg) Stderr: out/minikube-linux-arm64 -p functional-020915 image ls --format table --alsologtostderr:
I0407 12:15:07.125436  953414 out.go:345] Setting OutFile to fd 1 ...
I0407 12:15:07.125669  953414 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0407 12:15:07.125697  953414 out.go:358] Setting ErrFile to fd 2...
I0407 12:15:07.125714  953414 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0407 12:15:07.126023  953414 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20602-902080/.minikube/bin
I0407 12:15:07.126744  953414 config.go:182] Loaded profile config "functional-020915": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
I0407 12:15:07.126930  953414 config.go:182] Loaded profile config "functional-020915": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
I0407 12:15:07.127426  953414 cli_runner.go:164] Run: docker container inspect functional-020915 --format={{.State.Status}}
I0407 12:15:07.148185  953414 ssh_runner.go:195] Run: systemctl --version
I0407 12:15:07.148240  953414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-020915
I0407 12:15:07.173220  953414 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33891 SSHKeyPath:/home/jenkins/minikube-integration/20602-902080/.minikube/machines/functional-020915/id_rsa Username:docker}
I0407 12:15:07.262075  953414 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:278: (dbg) Run:  out/minikube-linux-arm64 -p functional-020915 image ls --format json --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-arm64 -p functional-020915 image ls --format json --alsologtostderr:
[{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},
{"id":"6417e1437b6d9a789e1ca789695a574e1df00a632bdbfbcae9695c9a7d500e32","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.32.2"],"size":"93900000"},
{"id":"2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"60200000"},
{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"244000000"},
{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},
{"id":"2c9168b3c9a84851f91e03534dc4136951e9f581ab3ac8ee38b28b49ad57ba38","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"197000000"},
{"id":"7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.16-0"],"size":"142000000"},
{"id":"afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"514000"},
{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"42300000"},
{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},
{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},
{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"},
{"id":"82dfa03f692fb5d84f66c17d6ee9126b081182152b25d28ea456d89b7d5d8911","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.32.2"],"size":"67900000"},
{"id":"3c9285acfd2ff7915bb451cc40ac060366ac519f3fef00c455f5aca0e0346c4d","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.32.2"],"size":"87200000"},
{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-020915"],"size":"4780000"},
{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},
{"id":"0b1d353a6ddff86f086881fecca83601135e8e0a44fcdfb4efe428490f799c68","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-020915"],"size":"30"},
{"id":"e5aac5df76d9b8dc899ab8c4db25a7648e7fb25cafe7a155066247883c78f062","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.32.2"],"size":"97100000"},
{"id":"cedb667e1a7b4e6d843a4f74f1f2db0dac1c29b43978aa72dbae2193e3b8eea3","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"49400000"}]
functional_test.go:286: (dbg) Stderr: out/minikube-linux-arm64 -p functional-020915 image ls --format json --alsologtostderr:
I0407 12:15:06.878079  953316 out.go:345] Setting OutFile to fd 1 ...
I0407 12:15:06.878282  953316 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0407 12:15:06.878288  953316 out.go:358] Setting ErrFile to fd 2...
I0407 12:15:06.878293  953316 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0407 12:15:06.878738  953316 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20602-902080/.minikube/bin
I0407 12:15:06.879350  953316 config.go:182] Loaded profile config "functional-020915": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
I0407 12:15:06.879473  953316 config.go:182] Loaded profile config "functional-020915": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
I0407 12:15:06.880004  953316 cli_runner.go:164] Run: docker container inspect functional-020915 --format={{.State.Status}}
I0407 12:15:06.902372  953316 ssh_runner.go:195] Run: systemctl --version
I0407 12:15:06.902435  953316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-020915
I0407 12:15:06.921489  953316 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33891 SSHKeyPath:/home/jenkins/minikube-integration/20602-902080/.minikube/machines/functional-020915/id_rsa Username:docker}
I0407 12:15:07.024675  953316 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:278: (dbg) Run:  out/minikube-linux-arm64 -p functional-020915 image ls --format yaml --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-arm64 -p functional-020915 image ls --format yaml --alsologtostderr:
- id: 6417e1437b6d9a789e1ca789695a574e1df00a632bdbfbcae9695c9a7d500e32
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.32.2
size: "93900000"
- id: 82dfa03f692fb5d84f66c17d6ee9126b081182152b25d28ea456d89b7d5d8911
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.32.2
size: "67900000"
- id: 3c9285acfd2ff7915bb451cc40ac060366ac519f3fef00c455f5aca0e0346c4d
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.32.2
size: "87200000"
- id: 2c9168b3c9a84851f91e03534dc4136951e9f581ab3ac8ee38b28b49ad57ba38
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "197000000"
- id: 7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.16-0
size: "142000000"
- id: afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "514000"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "244000000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"
- id: e5aac5df76d9b8dc899ab8c4db25a7648e7fb25cafe7a155066247883c78f062
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.32.2
size: "97100000"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "42300000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"
- id: 0b1d353a6ddff86f086881fecca83601135e8e0a44fcdfb4efe428490f799c68
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-020915
size: "30"
- id: 2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "60200000"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-020915
size: "4780000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: cedb667e1a7b4e6d843a4f74f1f2db0dac1c29b43978aa72dbae2193e3b8eea3
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "49400000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"

functional_test.go:286: (dbg) Stderr: out/minikube-linux-arm64 -p functional-020915 image ls --format yaml --alsologtostderr:
I0407 12:15:06.608025  953259 out.go:345] Setting OutFile to fd 1 ...
I0407 12:15:06.608145  953259 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0407 12:15:06.608158  953259 out.go:358] Setting ErrFile to fd 2...
I0407 12:15:06.608171  953259 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0407 12:15:06.608525  953259 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20602-902080/.minikube/bin
I0407 12:15:06.611491  953259 config.go:182] Loaded profile config "functional-020915": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
I0407 12:15:06.611660  953259 config.go:182] Loaded profile config "functional-020915": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
I0407 12:15:06.612500  953259 cli_runner.go:164] Run: docker container inspect functional-020915 --format={{.State.Status}}
I0407 12:15:06.634229  953259 ssh_runner.go:195] Run: systemctl --version
I0407 12:15:06.634290  953259 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-020915
I0407 12:15:06.670799  953259 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33891 SSHKeyPath:/home/jenkins/minikube-integration/20602-902080/.minikube/machines/functional-020915/id_rsa Username:docker}
I0407 12:15:06.765899  953259 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.47s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-020915 ssh pgrep buildkitd
functional_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-020915 ssh pgrep buildkitd: exit status 1 (334.689815ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:332: (dbg) Run:  out/minikube-linux-arm64 -p functional-020915 image build -t localhost/my-image:functional-020915 testdata/build --alsologtostderr
functional_test.go:332: (dbg) Done: out/minikube-linux-arm64 -p functional-020915 image build -t localhost/my-image:functional-020915 testdata/build --alsologtostderr: (2.923683444s)
functional_test.go:340: (dbg) Stderr: out/minikube-linux-arm64 -p functional-020915 image build -t localhost/my-image:functional-020915 testdata/build --alsologtostderr:
I0407 12:15:06.677225  953264 out.go:345] Setting OutFile to fd 1 ...
I0407 12:15:06.678102  953264 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0407 12:15:06.678139  953264 out.go:358] Setting ErrFile to fd 2...
I0407 12:15:06.678159  953264 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0407 12:15:06.678457  953264 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20602-902080/.minikube/bin
I0407 12:15:06.679129  953264 config.go:182] Loaded profile config "functional-020915": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
I0407 12:15:06.680908  953264 config.go:182] Loaded profile config "functional-020915": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
I0407 12:15:06.681488  953264 cli_runner.go:164] Run: docker container inspect functional-020915 --format={{.State.Status}}
I0407 12:15:06.703137  953264 ssh_runner.go:195] Run: systemctl --version
I0407 12:15:06.703188  953264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-020915
I0407 12:15:06.727878  953264 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33891 SSHKeyPath:/home/jenkins/minikube-integration/20602-902080/.minikube/machines/functional-020915/id_rsa Username:docker}
I0407 12:15:06.814279  953264 build_images.go:161] Building image from path: /tmp/build.1783423162.tar
I0407 12:15:06.814353  953264 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0407 12:15:06.828894  953264 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1783423162.tar
I0407 12:15:06.833670  953264 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1783423162.tar: stat -c "%s %y" /var/lib/minikube/build/build.1783423162.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1783423162.tar': No such file or directory
I0407 12:15:06.833700  953264 ssh_runner.go:362] scp /tmp/build.1783423162.tar --> /var/lib/minikube/build/build.1783423162.tar (3072 bytes)
I0407 12:15:06.873888  953264 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1783423162
I0407 12:15:06.884547  953264 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1783423162 -xf /var/lib/minikube/build/build.1783423162.tar
I0407 12:15:06.895772  953264 docker.go:360] Building image: /var/lib/minikube/build/build.1783423162
I0407 12:15:06.895850  953264 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-020915 /var/lib/minikube/build/build.1783423162
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.3s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9 527B / 527B done
#5 sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02 1.47kB / 1.47kB done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.1s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.0s done
#5 DONE 0.5s

#6 [2/3] RUN true
#6 DONE 0.3s

#7 [3/3] ADD content.txt /
#7 DONE 0.1s

#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:b2164c107c65a99e622fa892854316fb890a7f96ccdc7bdcce436ef25811196b done
#8 naming to localhost/my-image:functional-020915 done
#8 DONE 0.1s
I0407 12:15:09.496083  953264 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-020915 /var/lib/minikube/build/build.1783423162: (2.600212471s)
I0407 12:15:09.496158  953264 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1783423162
I0407 12:15:09.506419  953264 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1783423162.tar
I0407 12:15:09.515577  953264 build_images.go:217] Built localhost/my-image:functional-020915 from /tmp/build.1783423162.tar
I0407 12:15:09.515610  953264 build_images.go:133] succeeded building to: functional-020915
I0407 12:15:09.515615  953264 build_images.go:134] failed building to: 
functional_test.go:468: (dbg) Run:  out/minikube-linux-arm64 -p functional-020915 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.47s)
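
The BuildKit trace above pins down the test Dockerfile almost completely: a gcr.io/k8s-minikube/busybox base, a no-op RUN, and an ADD of content.txt (97B of Dockerfile, 62B of build context). A hedged reconstruction; the actual content.txt payload is not shown in the log:

# Rebuild an equivalent context and build it inside the cluster's daemon.
mkdir -p /tmp/build-demo && cd /tmp/build-demo
echo "placeholder" > content.txt      # real contents not logged
cat > Dockerfile <<'EOF'
FROM gcr.io/k8s-minikube/busybox:latest
RUN true
ADD content.txt /
EOF

out/minikube-linux-arm64 -p functional-020915 image build \
  -t localhost/my-image:functional-020915 .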

TestFunctional/parallel/ImageCommands/Setup (0.85s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:359: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:364: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-020915
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.85s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:372: (dbg) Run:  out/minikube-linux-arm64 -p functional-020915 image load --daemon kicbase/echo-server:functional-020915 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-arm64 -p functional-020915 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.23s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.84s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p functional-020915 image load --daemon kicbase/echo-server:functional-020915 --alsologtostderr
2025/04/07 12:15:01 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:468: (dbg) Run:  out/minikube-linux-arm64 -p functional-020915 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.84s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.22s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2136: (dbg) Run:  out/minikube-linux-arm64 -p functional-020915 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.22s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.19s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2136: (dbg) Run:  out/minikube-linux-arm64 -p functional-020915 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.19s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.19s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2136: (dbg) Run:  out/minikube-linux-arm64 -p functional-020915 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.19s)

TestFunctional/parallel/DockerEnv/bash (1.29s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:516: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-arm64 -p functional-020915 docker-env) && out/minikube-linux-arm64 status -p functional-020915"
functional_test.go:539: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-arm64 -p functional-020915 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.29s)
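
DockerEnv/bash checks that eval-ing docker-env repoints the host's docker CLI at the daemon inside the minikube node, so `docker images` lists the node's images rather than the host's. Usage sketch:

# Point the local docker CLI at the node's Docker daemon.
eval $(out/minikube-linux-arm64 -p functional-020915 docker-env)
docker images    # now lists images inside the node

# Revert to the host daemon when done.
eval $(out/minikube-linux-arm64 -p functional-020915 docker-env --unset)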

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:252: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:257: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-020915
functional_test.go:262: (dbg) Run:  out/minikube-linux-arm64 -p functional-020915 image load --daemon kicbase/echo-server:functional-020915 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-arm64 -p functional-020915 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.30s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.47s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:397: (dbg) Run:  out/minikube-linux-arm64 -p functional-020915 image save kicbase/echo-server:functional-020915 /home/jenkins/workspace/Docker_Linux_docker_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.47s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.55s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:409: (dbg) Run:  out/minikube-linux-arm64 -p functional-020915 image rm kicbase/echo-server:functional-020915 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-arm64 -p functional-020915 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.55s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.7s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:426: (dbg) Run:  out/minikube-linux-arm64 -p functional-020915 image load /home/jenkins/workspace/Docker_Linux_docker_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-arm64 -p functional-020915 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.70s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.52s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:436: (dbg) Run:  docker rmi kicbase/echo-server:functional-020915
functional_test.go:441: (dbg) Run:  out/minikube-linux-arm64 -p functional-020915 image save --daemon kicbase/echo-server:functional-020915 --alsologtostderr
functional_test.go:449: (dbg) Run:  docker image inspect kicbase/echo-server:functional-020915
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.52s)
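
ImageSaveToFile through ImageSaveDaemon cover the full round trip between the node's image store, a tarball on the host, and the host's Docker daemon. In sequence (tarball path shortened here):

# Node image -> tarball on the host.
out/minikube-linux-arm64 -p functional-020915 image save \
  kicbase/echo-server:functional-020915 /tmp/echo-server-save.tar

# Remove it from the node, then restore it from the tarball.
out/minikube-linux-arm64 -p functional-020915 image rm kicbase/echo-server:functional-020915
out/minikube-linux-arm64 -p functional-020915 image load /tmp/echo-server-save.tar

# Node image -> host Docker daemon, then confirm it arrived.
out/minikube-linux-arm64 -p functional-020915 image save --daemon kicbase/echo-server:functional-020915
docker image inspect kicbase/echo-server:functional-020915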

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:207: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:207: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-020915
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:215: (dbg) Run:  docker rmi -f localhost/my-image:functional-020915
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:223: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-020915
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (131.31s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-131043 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=docker
E0407 12:15:26.478248  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/addons-184883/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-131043 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=docker: (2m10.377252905s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-131043 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (131.31s)
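
StartCluster requests a multi-control-plane topology with the --ha flag; the two-minute wall time above covers provisioning all the control-plane nodes. As run here:

# Bring up an HA cluster, then verify every node reports healthy.
out/minikube-linux-arm64 start -p ha-131043 --wait=true --memory=2200 \
  --ha -v=7 --alsologtostderr --driver=docker --container-runtime=docker
out/minikube-linux-arm64 -p ha-131043 status -v=7 --alsologtostderr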

TestMultiControlPlane/serial/DeployApp (8.34s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-131043 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-131043 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-131043 -- rollout status deployment/busybox: (5.098711051s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-131043 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-131043 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-131043 -- exec busybox-58667487b6-5csnw -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-131043 -- exec busybox-58667487b6-g94bc -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-131043 -- exec busybox-58667487b6-zw479 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-131043 -- exec busybox-58667487b6-5csnw -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-131043 -- exec busybox-58667487b6-g94bc -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-131043 -- exec busybox-58667487b6-zw479 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-131043 -- exec busybox-58667487b6-5csnw -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-131043 -- exec busybox-58667487b6-g94bc -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-131043 -- exec busybox-58667487b6-zw479 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (8.34s)

TestMultiControlPlane/serial/PingHostFromPods (1.79s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-131043 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-131043 -- exec busybox-58667487b6-5csnw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-131043 -- exec busybox-58667487b6-5csnw -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-131043 -- exec busybox-58667487b6-g94bc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-131043 -- exec busybox-58667487b6-g94bc -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-131043 -- exec busybox-58667487b6-zw479 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-131043 -- exec busybox-58667487b6-zw479 -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.79s)
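
Each pod is asked to resolve the host's internal DNS name, then ping the docker network gateway (sketch; <pod> is again a placeholder for a busybox replica):

    # Extract the IP that host.minikube.internal resolves to inside the pod
    minikube kubectl -p ha-131043 -- exec <pod> -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
    # Single ICMP probe to the gateway the cluster sees
    minikube kubectl -p ha-131043 -- exec <pod> -- sh -c "ping -c 1 192.168.49.1"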

TestMultiControlPlane/serial/AddWorkerNode (27.56s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-131043 -v=7 --alsologtostderr
E0407 12:17:42.616778  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/addons-184883/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-131043 -v=7 --alsologtostderr: (26.521079409s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-131043 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-131043 status -v=7 --alsologtostderr: (1.037502452s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (27.56s)
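
Equivalent manual steps (sketch, same profile):

    # Append a worker node to the running HA cluster, then re-check status
    minikube node add -p ha-131043 -v=7 --alsologtostderr
    minikube -p ha-131043 status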

TestMultiControlPlane/serial/NodeLabels (0.11s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-131043 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.11s)
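
The label check is a single kubectl query against the cluster context, asserting that minikube's node labels survived the multi-node setup:

    # Print every node's label map as one jsonpath expression
    kubectl --context ha-131043 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"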

TestMultiControlPlane/serial/HAppyAfterClusterStart (1.03s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.032071481s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.03s)

TestMultiControlPlane/serial/CopyFile (19.74s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-131043 status --output json -v=7 --alsologtostderr
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-131043 status --output json -v=7 --alsologtostderr: (1.019402135s)
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-131043 cp testdata/cp-test.txt ha-131043:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-131043 ssh -n ha-131043 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-131043 cp ha-131043:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile637656205/001/cp-test_ha-131043.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-131043 ssh -n ha-131043 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-131043 cp ha-131043:/home/docker/cp-test.txt ha-131043-m02:/home/docker/cp-test_ha-131043_ha-131043-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-131043 ssh -n ha-131043 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-131043 ssh -n ha-131043-m02 "sudo cat /home/docker/cp-test_ha-131043_ha-131043-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-131043 cp ha-131043:/home/docker/cp-test.txt ha-131043-m03:/home/docker/cp-test_ha-131043_ha-131043-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-131043 ssh -n ha-131043 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-131043 ssh -n ha-131043-m03 "sudo cat /home/docker/cp-test_ha-131043_ha-131043-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-131043 cp ha-131043:/home/docker/cp-test.txt ha-131043-m04:/home/docker/cp-test_ha-131043_ha-131043-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-131043 ssh -n ha-131043 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-131043 ssh -n ha-131043-m04 "sudo cat /home/docker/cp-test_ha-131043_ha-131043-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-131043 cp testdata/cp-test.txt ha-131043-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-131043 ssh -n ha-131043-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-131043 cp ha-131043-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile637656205/001/cp-test_ha-131043-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-131043 ssh -n ha-131043-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-131043 cp ha-131043-m02:/home/docker/cp-test.txt ha-131043:/home/docker/cp-test_ha-131043-m02_ha-131043.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-131043 ssh -n ha-131043-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-131043 ssh -n ha-131043 "sudo cat /home/docker/cp-test_ha-131043-m02_ha-131043.txt"
E0407 12:18:10.321610  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/addons-184883/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-131043 cp ha-131043-m02:/home/docker/cp-test.txt ha-131043-m03:/home/docker/cp-test_ha-131043-m02_ha-131043-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-131043 ssh -n ha-131043-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-131043 ssh -n ha-131043-m03 "sudo cat /home/docker/cp-test_ha-131043-m02_ha-131043-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-131043 cp ha-131043-m02:/home/docker/cp-test.txt ha-131043-m04:/home/docker/cp-test_ha-131043-m02_ha-131043-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-131043 ssh -n ha-131043-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-131043 ssh -n ha-131043-m04 "sudo cat /home/docker/cp-test_ha-131043-m02_ha-131043-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-131043 cp testdata/cp-test.txt ha-131043-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-131043 ssh -n ha-131043-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-131043 cp ha-131043-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile637656205/001/cp-test_ha-131043-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-131043 ssh -n ha-131043-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-131043 cp ha-131043-m03:/home/docker/cp-test.txt ha-131043:/home/docker/cp-test_ha-131043-m03_ha-131043.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-131043 ssh -n ha-131043-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-131043 ssh -n ha-131043 "sudo cat /home/docker/cp-test_ha-131043-m03_ha-131043.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-131043 cp ha-131043-m03:/home/docker/cp-test.txt ha-131043-m02:/home/docker/cp-test_ha-131043-m03_ha-131043-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-131043 ssh -n ha-131043-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-131043 ssh -n ha-131043-m02 "sudo cat /home/docker/cp-test_ha-131043-m03_ha-131043-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-131043 cp ha-131043-m03:/home/docker/cp-test.txt ha-131043-m04:/home/docker/cp-test_ha-131043-m03_ha-131043-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-131043 ssh -n ha-131043-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-131043 ssh -n ha-131043-m04 "sudo cat /home/docker/cp-test_ha-131043-m03_ha-131043-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-131043 cp testdata/cp-test.txt ha-131043-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-131043 ssh -n ha-131043-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-131043 cp ha-131043-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile637656205/001/cp-test_ha-131043-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-131043 ssh -n ha-131043-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-131043 cp ha-131043-m04:/home/docker/cp-test.txt ha-131043:/home/docker/cp-test_ha-131043-m04_ha-131043.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-131043 ssh -n ha-131043-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-131043 ssh -n ha-131043 "sudo cat /home/docker/cp-test_ha-131043-m04_ha-131043.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-131043 cp ha-131043-m04:/home/docker/cp-test.txt ha-131043-m02:/home/docker/cp-test_ha-131043-m04_ha-131043-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-131043 ssh -n ha-131043-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-131043 ssh -n ha-131043-m02 "sudo cat /home/docker/cp-test_ha-131043-m04_ha-131043-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-131043 cp ha-131043-m04:/home/docker/cp-test.txt ha-131043-m03:/home/docker/cp-test_ha-131043-m04_ha-131043-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-131043 ssh -n ha-131043-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-131043 ssh -n ha-131043-m03 "sudo cat /home/docker/cp-test_ha-131043-m04_ha-131043-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.74s)
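
The matrix above exercises every node pair in three cp directions; stripped to one case each (sketch; the /tmp destination path is arbitrary, the test uses a per-run temp dir):

    # host -> node: copy a file in, then read it back over SSH
    minikube -p ha-131043 cp testdata/cp-test.txt ha-131043-m02:/home/docker/cp-test.txt
    minikube -p ha-131043 ssh -n ha-131043-m02 "sudo cat /home/docker/cp-test.txt"
    # node -> host
    minikube -p ha-131043 cp ha-131043-m02:/home/docker/cp-test.txt /tmp/cp-test_ha-131043-m02.txt
    # node -> node
    minikube -p ha-131043 cp ha-131043-m02:/home/docker/cp-test.txt ha-131043-m03:/home/docker/cp-test_ha-131043-m02_ha-131043-m03.txt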

TestMultiControlPlane/serial/StopSecondaryNode (11.85s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-131043 node stop m02 -v=7 --alsologtostderr
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-131043 node stop m02 -v=7 --alsologtostderr: (11.108175494s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-131043 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-131043 status -v=7 --alsologtostderr: exit status 7 (740.535853ms)

-- stdout --
	ha-131043
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-131043-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-131043-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-131043-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0407 12:18:33.470764  976599 out.go:345] Setting OutFile to fd 1 ...
	I0407 12:18:33.470960  976599 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 12:18:33.470988  976599 out.go:358] Setting ErrFile to fd 2...
	I0407 12:18:33.471005  976599 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 12:18:33.471305  976599 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20602-902080/.minikube/bin
	I0407 12:18:33.471552  976599 out.go:352] Setting JSON to false
	I0407 12:18:33.471610  976599 mustload.go:65] Loading cluster: ha-131043
	I0407 12:18:33.472054  976599 config.go:182] Loaded profile config "ha-131043": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0407 12:18:33.472096  976599 status.go:174] checking status of ha-131043 ...
	I0407 12:18:33.472353  976599 notify.go:220] Checking for updates...
	I0407 12:18:33.472745  976599 cli_runner.go:164] Run: docker container inspect ha-131043 --format={{.State.Status}}
	I0407 12:18:33.494615  976599 status.go:371] ha-131043 host status = "Running" (err=<nil>)
	I0407 12:18:33.494638  976599 host.go:66] Checking if "ha-131043" exists ...
	I0407 12:18:33.494948  976599 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-131043
	I0407 12:18:33.519603  976599 host.go:66] Checking if "ha-131043" exists ...
	I0407 12:18:33.520054  976599 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0407 12:18:33.520106  976599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-131043
	I0407 12:18:33.545561  976599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33896 SSHKeyPath:/home/jenkins/minikube-integration/20602-902080/.minikube/machines/ha-131043/id_rsa Username:docker}
	I0407 12:18:33.642717  976599 ssh_runner.go:195] Run: systemctl --version
	I0407 12:18:33.647308  976599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0407 12:18:33.660688  976599 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0407 12:18:33.718922  976599 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:62 OomKillDisable:true NGoroutines:72 SystemTime:2025-04-07 12:18:33.709029629 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1081-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0]] Warnings:<nil>}}
	I0407 12:18:33.719457  976599 kubeconfig.go:125] found "ha-131043" server: "https://192.168.49.254:8443"
	I0407 12:18:33.719495  976599 api_server.go:166] Checking apiserver status ...
	I0407 12:18:33.719547  976599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 12:18:33.731753  976599 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2530/cgroup
	I0407 12:18:33.741619  976599 api_server.go:182] apiserver freezer: "11:freezer:/docker/7c5e45c42e1b5069006545094130fd8d64a3bdc97a4826b7a070226ab4853103/kubepods/burstable/podfffe739746d5159f8ce77d5dcd8dec7d/38e208b9d94e6e4cacca062015ddfa7011752b8a35fa0d521b584f4e61f3ad34"
	I0407 12:18:33.741693  976599 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/7c5e45c42e1b5069006545094130fd8d64a3bdc97a4826b7a070226ab4853103/kubepods/burstable/podfffe739746d5159f8ce77d5dcd8dec7d/38e208b9d94e6e4cacca062015ddfa7011752b8a35fa0d521b584f4e61f3ad34/freezer.state
	I0407 12:18:33.750835  976599 api_server.go:204] freezer state: "THAWED"
	I0407 12:18:33.750862  976599 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0407 12:18:33.758682  976599 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0407 12:18:33.758724  976599 status.go:463] ha-131043 apiserver status = Running (err=<nil>)
	I0407 12:18:33.758756  976599 status.go:176] ha-131043 status: &{Name:ha-131043 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0407 12:18:33.758777  976599 status.go:174] checking status of ha-131043-m02 ...
	I0407 12:18:33.759127  976599 cli_runner.go:164] Run: docker container inspect ha-131043-m02 --format={{.State.Status}}
	I0407 12:18:33.778289  976599 status.go:371] ha-131043-m02 host status = "Stopped" (err=<nil>)
	I0407 12:18:33.778311  976599 status.go:384] host is not running, skipping remaining checks
	I0407 12:18:33.778319  976599 status.go:176] ha-131043-m02 status: &{Name:ha-131043-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0407 12:18:33.778339  976599 status.go:174] checking status of ha-131043-m03 ...
	I0407 12:18:33.778655  976599 cli_runner.go:164] Run: docker container inspect ha-131043-m03 --format={{.State.Status}}
	I0407 12:18:33.796934  976599 status.go:371] ha-131043-m03 host status = "Running" (err=<nil>)
	I0407 12:18:33.796960  976599 host.go:66] Checking if "ha-131043-m03" exists ...
	I0407 12:18:33.797268  976599 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-131043-m03
	I0407 12:18:33.820180  976599 host.go:66] Checking if "ha-131043-m03" exists ...
	I0407 12:18:33.820515  976599 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0407 12:18:33.820594  976599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-131043-m03
	I0407 12:18:33.838602  976599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33906 SSHKeyPath:/home/jenkins/minikube-integration/20602-902080/.minikube/machines/ha-131043-m03/id_rsa Username:docker}
	I0407 12:18:33.934505  976599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0407 12:18:33.946787  976599 kubeconfig.go:125] found "ha-131043" server: "https://192.168.49.254:8443"
	I0407 12:18:33.946818  976599 api_server.go:166] Checking apiserver status ...
	I0407 12:18:33.946905  976599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 12:18:33.958408  976599 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2295/cgroup
	I0407 12:18:33.968053  976599 api_server.go:182] apiserver freezer: "11:freezer:/docker/99f621976c3a0851e190093506e514d9dbcd0bdff42407bfb506da749cb395b9/kubepods/burstable/pod8d811f2ede64c3ca788c3753b314912e/a34ac966c8eda39ba62e7ebc380175730120b65a967b527c74c5e67478561c60"
	I0407 12:18:33.968133  976599 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/99f621976c3a0851e190093506e514d9dbcd0bdff42407bfb506da749cb395b9/kubepods/burstable/pod8d811f2ede64c3ca788c3753b314912e/a34ac966c8eda39ba62e7ebc380175730120b65a967b527c74c5e67478561c60/freezer.state
	I0407 12:18:33.977026  976599 api_server.go:204] freezer state: "THAWED"
	I0407 12:18:33.977054  976599 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0407 12:18:33.985323  976599 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0407 12:18:33.985354  976599 status.go:463] ha-131043-m03 apiserver status = Running (err=<nil>)
	I0407 12:18:33.985363  976599 status.go:176] ha-131043-m03 status: &{Name:ha-131043-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0407 12:18:33.985380  976599 status.go:174] checking status of ha-131043-m04 ...
	I0407 12:18:33.985691  976599 cli_runner.go:164] Run: docker container inspect ha-131043-m04 --format={{.State.Status}}
	I0407 12:18:34.003955  976599 status.go:371] ha-131043-m04 host status = "Running" (err=<nil>)
	I0407 12:18:34.003983  976599 host.go:66] Checking if "ha-131043-m04" exists ...
	I0407 12:18:34.004328  976599 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-131043-m04
	I0407 12:18:34.025488  976599 host.go:66] Checking if "ha-131043-m04" exists ...
	I0407 12:18:34.025828  976599 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0407 12:18:34.025919  976599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-131043-m04
	I0407 12:18:34.046771  976599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33911 SSHKeyPath:/home/jenkins/minikube-integration/20602-902080/.minikube/machines/ha-131043-m04/id_rsa Username:docker}
	I0407 12:18:34.134688  976599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0407 12:18:34.147060  976599 status.go:176] ha-131043-m04 status: &{Name:ha-131043-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (11.85s)
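
The stop/status exchange above corresponds to the following (sketch; status deliberately returns exit code 7 while any node is down, as the non-zero exit in the log shows):

    # Stop the m02 control-plane node
    minikube -p ha-131043 node stop m02
    # Status now reports m02 as Stopped and exits 7 rather than 0
    minikube -p ha-131043 status || echo "status exited $? (expected while m02 is down)"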

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.77s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.77s)

TestMultiControlPlane/serial/RestartSecondaryNode (37.75s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-131043 node start m02 -v=7 --alsologtostderr
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-131043 node start m02 -v=7 --alsologtostderr: (36.252307052s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-131043 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-131043 status -v=7 --alsologtostderr: (1.335099967s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (37.75s)
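
Restart is the inverse of the stop step (sketch):

    # Bring m02 back and confirm the cluster reports healthy again
    minikube -p ha-131043 node start m02
    minikube -p ha-131043 status
    kubectl get nodes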

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.18s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.182902752s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.18s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (207.03s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-131043 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-131043 -v=7 --alsologtostderr
E0407 12:19:16.921977  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/functional-020915/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:19:16.928302  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/functional-020915/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:19:16.939627  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/functional-020915/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:19:16.961055  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/functional-020915/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:19:17.002420  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/functional-020915/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:19:17.083788  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/functional-020915/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:19:17.245211  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/functional-020915/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:19:17.566806  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/functional-020915/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:19:18.208751  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/functional-020915/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:19:19.490033  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/functional-020915/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:19:22.052138  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/functional-020915/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:19:27.174100  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/functional-020915/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:19:37.415572  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/functional-020915/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 stop -p ha-131043 -v=7 --alsologtostderr: (34.660780682s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 start -p ha-131043 --wait=true -v=7 --alsologtostderr
E0407 12:19:57.897048  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/functional-020915/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:20:38.858985  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/functional-020915/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:22:00.780268  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/functional-020915/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 start -p ha-131043 --wait=true -v=7 --alsologtostderr: (2m52.155149995s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-131043
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (207.03s)
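
The round-trip this step performs, in sketch form:

    # Snapshot the node list, stop everything, restart, and compare
    minikube node list -p ha-131043
    minikube stop -p ha-131043
    minikube start -p ha-131043 --wait=true
    minikube node list -p ha-131043   # should match the pre-stop list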

TestMultiControlPlane/serial/DeleteSecondaryNode (11.08s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-131043 node delete m03 -v=7 --alsologtostderr
E0407 12:22:42.616621  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/addons-184883/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-131043 node delete m03 -v=7 --alsologtostderr: (10.171951111s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-131043 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.08s)
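
Sketch of the deletion step:

    # Drop the m03 control-plane node, then confirm the remaining nodes stay Ready
    minikube -p ha-131043 node delete m03
    kubectl get nodes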

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.81s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.81s)

TestMultiControlPlane/serial/StopCluster (32.72s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-131043 stop -v=7 --alsologtostderr
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-131043 stop -v=7 --alsologtostderr: (32.607800776s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-131043 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-131043 status -v=7 --alsologtostderr: exit status 7 (110.580094ms)

-- stdout --
	ha-131043
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-131043-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-131043-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0407 12:23:25.432295 1005046 out.go:345] Setting OutFile to fd 1 ...
	I0407 12:23:25.432482 1005046 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 12:23:25.432508 1005046 out.go:358] Setting ErrFile to fd 2...
	I0407 12:23:25.432528 1005046 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 12:23:25.432956 1005046 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20602-902080/.minikube/bin
	I0407 12:23:25.433481 1005046 out.go:352] Setting JSON to false
	I0407 12:23:25.433551 1005046 mustload.go:65] Loading cluster: ha-131043
	I0407 12:23:25.433715 1005046 notify.go:220] Checking for updates...
	I0407 12:23:25.434140 1005046 config.go:182] Loaded profile config "ha-131043": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0407 12:23:25.434181 1005046 status.go:174] checking status of ha-131043 ...
	I0407 12:23:25.434752 1005046 cli_runner.go:164] Run: docker container inspect ha-131043 --format={{.State.Status}}
	I0407 12:23:25.454294 1005046 status.go:371] ha-131043 host status = "Stopped" (err=<nil>)
	I0407 12:23:25.454314 1005046 status.go:384] host is not running, skipping remaining checks
	I0407 12:23:25.454320 1005046 status.go:176] ha-131043 status: &{Name:ha-131043 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0407 12:23:25.454345 1005046 status.go:174] checking status of ha-131043-m02 ...
	I0407 12:23:25.454648 1005046 cli_runner.go:164] Run: docker container inspect ha-131043-m02 --format={{.State.Status}}
	I0407 12:23:25.475034 1005046 status.go:371] ha-131043-m02 host status = "Stopped" (err=<nil>)
	I0407 12:23:25.475055 1005046 status.go:384] host is not running, skipping remaining checks
	I0407 12:23:25.475062 1005046 status.go:176] ha-131043-m02 status: &{Name:ha-131043-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0407 12:23:25.475080 1005046 status.go:174] checking status of ha-131043-m04 ...
	I0407 12:23:25.475373 1005046 cli_runner.go:164] Run: docker container inspect ha-131043-m04 --format={{.State.Status}}
	I0407 12:23:25.492721 1005046 status.go:371] ha-131043-m04 host status = "Stopped" (err=<nil>)
	I0407 12:23:25.492744 1005046 status.go:384] host is not running, skipping remaining checks
	I0407 12:23:25.492751 1005046 status.go:176] ha-131043-m04 status: &{Name:ha-131043-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (32.72s)
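
Sketch of the full-cluster stop; as with the single-node case, status exits 7 once the hosts are down:

    minikube -p ha-131043 stop
    minikube -p ha-131043 status || echo "status exited $? (expected: 7, all hosts stopped)"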

TestMultiControlPlane/serial/RestartCluster (84.47s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 start -p ha-131043 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=docker
E0407 12:24:16.922784  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/functional-020915/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:24:44.622255  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/functional-020915/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 start -p ha-131043 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=docker: (1m23.491223555s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-131043 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (84.47s)
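
Sketch of the in-place restart from the fully stopped state:

    # Restart against the existing profile; --wait=true blocks until components are healthy
    minikube start -p ha-131043 --wait=true --driver=docker --container-runtime=docker
    minikube -p ha-131043 status
    kubectl get nodes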

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.79s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.79s)

TestMultiControlPlane/serial/AddSecondaryNode (49.94s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-131043 --control-plane -v=7 --alsologtostderr
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 node add -p ha-131043 --control-plane -v=7 --alsologtostderr: (48.850414096s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-131043 status -v=7 --alsologtostderr
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-131043 status -v=7 --alsologtostderr: (1.085137013s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (49.94s)
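
Sketch of re-adding a control-plane node after the earlier deletion:

    # --control-plane makes the new node join as an additional control plane
    minikube node add -p ha-131043 --control-plane
    minikube -p ha-131043 status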

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.07s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.07302781s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.07s)

TestImageBuild/serial/Setup (32.32s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -p image-501221 --driver=docker  --container-runtime=docker
image_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -p image-501221 --driver=docker  --container-runtime=docker: (32.321877657s)
--- PASS: TestImageBuild/serial/Setup (32.32s)

TestImageBuild/serial/NormalBuild (1.78s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-501221
image_test.go:78: (dbg) Done: out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-501221: (1.778459168s)
--- PASS: TestImageBuild/serial/NormalBuild (1.78s)

TestImageBuild/serial/BuildWithBuildArg (1s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-501221
--- PASS: TestImageBuild/serial/BuildWithBuildArg (1.00s)

TestImageBuild/serial/BuildWithDockerIgnore (0.97s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-501221
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.97s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.75s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-501221
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.75s)
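
The four build variants above differ only in their flags; condensed into one sketch (profile image-501221 from the log):

    # Plain docker build inside the cluster
    minikube image build -t aaa:latest ./testdata/image-build/test-normal -p image-501221
    # Build arg plus cache disabled, via repeated --build-opt
    minikube image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-501221
    # Explicit Dockerfile path relative to the build context
    minikube image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-501221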

TestJSONOutput/start/Command (42.87s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-567956 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-567956 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker: (42.86466088s)
--- PASS: TestJSONOutput/start/Command (42.87s)
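
With --output=json, every progress step is emitted as one CloudEvents JSON object per line (see the TestErrorJSONOutput stdout below for the shape). A sketch of consuming that stream; piping through jq is an assumption for illustration, not part of the test:

    minikube start -p json-output-567956 --output=json --user=testUser --memory=2200 --wait=true --driver=docker --container-runtime=docker \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | .data.message'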

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.6s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-567956 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.60s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.53s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-567956 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.53s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.73s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-567956 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-567956 --output=json --user=testUser: (5.725121272s)
--- PASS: TestJSONOutput/stop/Command (5.73s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.25s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-711050 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-711050 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (93.005587ms)

-- stdout --
	{"specversion":"1.0","id":"7c6f82bf-efd1-40f4-b069-3de091cad888","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-711050] minikube v1.35.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"ed543905-f978-4493-bc9b-a95c59c1966d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20602"}}
	{"specversion":"1.0","id":"b8f52ab8-78a4-43cf-ad4c-3a6b65e51d9f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"bccc91b0-5a8e-4ba4-a510-42610f4c539b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20602-902080/kubeconfig"}}
	{"specversion":"1.0","id":"8035fc72-2d2a-461b-9e89-71c0c6b3c6e5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20602-902080/.minikube"}}
	{"specversion":"1.0","id":"9fa5fba8-3eb2-410e-a257-5a6764af4ed7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"5b31f176-e378-416d-b5dd-d9254e51357c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"f01f9037-5393-483d-9f7e-b5f95f026eb6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-711050" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-711050
--- PASS: TestErrorJSONOutput (0.25s)
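
The error path in sketch form: an unsupported driver makes start emit a single io.k8s.sigs.minikube.error event (DRV_UNSUPPORTED_OS above) and exit 56:

    minikube start -p json-output-error-711050 --memory=2200 --output=json --wait=true --driver=fail
    # Clean up the half-created profile afterwards
    minikube delete -p json-output-error-711050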

TestKicCustomNetwork/create_custom_network (37.97s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-265942 --network=
E0407 12:27:42.616985  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/addons-184883/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-265942 --network=: (35.706028181s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-265942" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-265942
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-265942: (2.226280011s)
--- PASS: TestKicCustomNetwork/create_custom_network (37.97s)

TestKicCustomNetwork/use_default_bridge_network (32.79s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-506129 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-506129 --network=bridge: (30.736422559s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-506129" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-506129
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-506129: (2.032084678s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (32.79s)
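
Sketch of the two network modes exercised above:

    # An empty --network= lets minikube create its own docker network for the cluster
    minikube start -p docker-network-265942 --network=
    # --network=bridge reuses docker's default bridge instead
    minikube start -p docker-network-506129 --network=bridge
    # Inspect the resulting networks either way
    docker network ls --format '{{.Name}}'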

TestKicExistingNetwork (37.11s)

=== RUN   TestKicExistingNetwork
I0407 12:28:31.527804  907461 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0407 12:28:31.544363  907461 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0407 12:28:31.545212  907461 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0407 12:28:31.545890  907461 cli_runner.go:164] Run: docker network inspect existing-network
W0407 12:28:31.562453  907461 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0407 12:28:31.562480  907461 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I0407 12:28:31.562497  907461 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I0407 12:28:31.563524  907461 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0407 12:28:31.584753  907461 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-ac0706c6046e IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:1a:7e:5d:ac:03:df} reservation:<nil>}
I0407 12:28:31.587477  907461 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
I0407 12:28:31.587847  907461 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001da1950}
I0407 12:28:31.588457  907461 network_create.go:124] attempt to create docker network existing-network 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
I0407 12:28:31.588532  907461 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0407 12:28:31.667802  907461 network_create.go:108] docker network existing-network 192.168.67.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-826717 --network=existing-network
E0407 12:29:05.683211  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/addons-184883/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-826717 --network=existing-network: (34.830248461s)
helpers_test.go:175: Cleaning up "existing-network-826717" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-826717
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-826717: (2.105162495s)
I0407 12:29:08.620038  907461 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (37.11s)
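The log above shows the two-step pattern the test relies on: pre-create a docker network, then point minikube at it so the network is reused rather than created. A hand-run sketch with the subnet this run picked (any free private /24 works; the profile name is arbitrary):

    docker network create --driver=bridge --subnet=192.168.67.0/24 \
      --gateway=192.168.67.1 -o com.docker.network.driver.mtu=1500 existing-network
    # minikube should detect and reuse the pre-existing network
    minikube start -p existing-net-demo --network=existing-network --driver=docker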

TestKicCustomSubnet (36.49s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-005100 --subnet=192.168.60.0/24
E0407 12:29:16.924681  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/functional-020915/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-005100 --subnet=192.168.60.0/24: (34.37998105s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-005100 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-005100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-005100
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-005100: (2.083301745s)
--- PASS: TestKicCustomSubnet (36.49s)
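The subnet check at kic_custom_network_test.go:161 is a one-liner that works outside the suite as well; a sketch with an arbitrary profile name:

    minikube start -p subnet-demo --subnet=192.168.60.0/24 --driver=docker
    # the docker network is named after the profile; expect 192.168.60.0/24
    docker network inspect subnet-demo --format '{{(index .IPAM.Config 0).Subnet}}'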

TestKicStaticIP (31.86s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-587505 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-587505 --static-ip=192.168.200.200: (29.589034715s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-587505 ip
helpers_test.go:175: Cleaning up "static-ip-587505" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-587505
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-587505: (2.124167733s)
--- PASS: TestKicStaticIP (31.86s)
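Same idea for the static-IP variant; a sketch:

    minikube start -p staticip-demo --static-ip=192.168.200.200 --driver=docker
    minikube -p staticip-demo ip   # should print 192.168.200.200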

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (71.2s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-278195 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-278195 --driver=docker  --container-runtime=docker: (30.474702693s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-280755 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-280755 --driver=docker  --container-runtime=docker: (34.795997205s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-278195
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-280755
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-280755" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-280755
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-280755: (2.252605417s)
helpers_test.go:175: Cleaning up "first-278195" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-278195
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-278195: (2.198567422s)
--- PASS: TestMinikubeProfile (71.20s)
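The profile subcommands used here are scriptable, since profile list -ojson emits machine-readable state. A sketch (jq and the .valid[].Name field path are assumptions of this example, not something the test itself uses):

    minikube profile list -ojson | jq -r '.valid[].Name'
    minikube profile first-278195    # make a profile the active one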

TestMountStart/serial/StartWithMountFirst (11.07s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-246464 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-246464 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (10.074493078s)
--- PASS: TestMountStart/serial/StartWithMountFirst (11.07s)
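The mount flags here map directly onto a manual invocation, and the Verify* steps that follow simply list the mounted path over ssh; a sketch with a throwaway profile:

    minikube start -p mount-demo --memory=2048 --mount --mount-gid 0 \
      --mount-msize 6543 --mount-port 46464 --mount-uid 0 \
      --no-kubernetes --driver=docker
    minikube -p mount-demo ssh -- ls /minikube-host   # the host dir appears here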

TestMountStart/serial/VerifyMountFirst (0.26s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-246464 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)

TestMountStart/serial/StartWithMountSecond (8.76s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-248343 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-248343 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (7.758958234s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.76s)

TestMountStart/serial/VerifyMountSecond (0.25s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-248343 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.25s)

TestMountStart/serial/DeleteFirst (1.49s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-246464 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-246464 --alsologtostderr -v=5: (1.494514172s)
--- PASS: TestMountStart/serial/DeleteFirst (1.49s)

TestMountStart/serial/VerifyMountPostDelete (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-248343 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)

TestMountStart/serial/Stop (1.19s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-248343
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-248343: (1.194817852s)
--- PASS: TestMountStart/serial/Stop (1.19s)

TestMountStart/serial/RestartStopped (8.05s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-248343
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-248343: (7.046410139s)
--- PASS: TestMountStart/serial/RestartStopped (8.05s)

TestMountStart/serial/VerifyMountPostStop (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-248343 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

TestMultiNode/serial/FreshStart2Nodes (87.49s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-946754 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
E0407 12:32:42.616508  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/addons-184883/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-946754 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (1m26.826600377s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-946754 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (87.49s)

TestMultiNode/serial/DeployApp2Nodes (37.2s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-946754 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-946754 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-946754 -- rollout status deployment/busybox: (4.339250979s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-946754 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0407 12:33:33.932379  907461 retry.go:31] will retry after 1.383917441s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-946754 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0407 12:33:35.482511  907461 retry.go:31] will retry after 1.823039299s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-946754 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0407 12:33:37.465940  907461 retry.go:31] will retry after 1.13627924s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-946754 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0407 12:33:38.767725  907461 retry.go:31] will retry after 3.010456581s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-946754 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0407 12:33:41.936499  907461 retry.go:31] will retry after 3.855586429s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-946754 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0407 12:33:45.951781  907461 retry.go:31] will retry after 3.99496456s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-946754 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0407 12:33:50.101107  907461 retry.go:31] will retry after 14.403478447s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-946754 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-946754 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-946754 -- exec busybox-58667487b6-j2bg4 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-946754 -- exec busybox-58667487b6-tpjgl -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-946754 -- exec busybox-58667487b6-j2bg4 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-946754 -- exec busybox-58667487b6-tpjgl -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-946754 -- exec busybox-58667487b6-j2bg4 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-946754 -- exec busybox-58667487b6-tpjgl -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (37.20s)
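The retry loop above is plain polling: the same jsonpath query is re-run until both busybox replicas report a pod IP. The equivalent wait can be scripted directly (a sketch using plain kubectl with the profile's context in place of minikube's wrapped kubectl):

    kubectl --context multinode-946754 rollout status deployment/busybox
    # wait until the query returns two space-separated pod IPs
    until [ "$(kubectl --context multinode-946754 get pods \
        -o jsonpath='{.items[*].status.podIP}' | wc -w)" -eq 2 ]; do
      sleep 2
    done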

TestMultiNode/serial/PingHostFrom2Pods (1.03s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-946754 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-946754 -- exec busybox-58667487b6-j2bg4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-946754 -- exec busybox-58667487b6-j2bg4 -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-946754 -- exec busybox-58667487b6-tpjgl -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-946754 -- exec busybox-58667487b6-tpjgl -- sh -c "ping -c 1 192.168.58.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.03s)
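The host-reachability check resolves host.minikube.internal inside each pod, then pings the resulting address (192.168.58.1, the cluster network's gateway, in this run). A sketch against one pod (the deploy/busybox exec target is an assumption of this example; the test addresses each pod by name):

    kubectl --context multinode-946754 exec deploy/busybox -- \
      sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
    kubectl --context multinode-946754 exec deploy/busybox -- ping -c 1 192.168.58.1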

TestMultiNode/serial/AddNode (18.22s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-946754 -v 3 --alsologtostderr
E0407 12:34:16.922743  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/functional-020915/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-946754 -v 3 --alsologtostderr: (17.457522059s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-946754 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (18.22s)

TestMultiNode/serial/MultiNodeLabels (0.12s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-946754 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.12s)

TestMultiNode/serial/ProfileList (0.73s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.73s)

TestMultiNode/serial/CopyFile (10.24s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-946754 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-946754 cp testdata/cp-test.txt multinode-946754:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-946754 ssh -n multinode-946754 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-946754 cp multinode-946754:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1510578170/001/cp-test_multinode-946754.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-946754 ssh -n multinode-946754 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-946754 cp multinode-946754:/home/docker/cp-test.txt multinode-946754-m02:/home/docker/cp-test_multinode-946754_multinode-946754-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-946754 ssh -n multinode-946754 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-946754 ssh -n multinode-946754-m02 "sudo cat /home/docker/cp-test_multinode-946754_multinode-946754-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-946754 cp multinode-946754:/home/docker/cp-test.txt multinode-946754-m03:/home/docker/cp-test_multinode-946754_multinode-946754-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-946754 ssh -n multinode-946754 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-946754 ssh -n multinode-946754-m03 "sudo cat /home/docker/cp-test_multinode-946754_multinode-946754-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-946754 cp testdata/cp-test.txt multinode-946754-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-946754 ssh -n multinode-946754-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-946754 cp multinode-946754-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1510578170/001/cp-test_multinode-946754-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-946754 ssh -n multinode-946754-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-946754 cp multinode-946754-m02:/home/docker/cp-test.txt multinode-946754:/home/docker/cp-test_multinode-946754-m02_multinode-946754.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-946754 ssh -n multinode-946754-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-946754 ssh -n multinode-946754 "sudo cat /home/docker/cp-test_multinode-946754-m02_multinode-946754.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-946754 cp multinode-946754-m02:/home/docker/cp-test.txt multinode-946754-m03:/home/docker/cp-test_multinode-946754-m02_multinode-946754-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-946754 ssh -n multinode-946754-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-946754 ssh -n multinode-946754-m03 "sudo cat /home/docker/cp-test_multinode-946754-m02_multinode-946754-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-946754 cp testdata/cp-test.txt multinode-946754-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-946754 ssh -n multinode-946754-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-946754 cp multinode-946754-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1510578170/001/cp-test_multinode-946754-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-946754 ssh -n multinode-946754-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-946754 cp multinode-946754-m03:/home/docker/cp-test.txt multinode-946754:/home/docker/cp-test_multinode-946754-m03_multinode-946754.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-946754 ssh -n multinode-946754-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-946754 ssh -n multinode-946754 "sudo cat /home/docker/cp-test_multinode-946754-m03_multinode-946754.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-946754 cp multinode-946754-m03:/home/docker/cp-test.txt multinode-946754-m02:/home/docker/cp-test_multinode-946754-m03_multinode-946754-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-946754 ssh -n multinode-946754-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-946754 ssh -n multinode-946754-m02 "sudo cat /home/docker/cp-test_multinode-946754-m03_multinode-946754-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.24s)
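The copy matrix above reduces to three forms of minikube cp (host to node, node to host, node to node), each verified with a sudo cat over ssh; a sketch:

    minikube -p multinode-946754 cp testdata/cp-test.txt multinode-946754:/home/docker/cp-test.txt
    minikube -p multinode-946754 cp multinode-946754:/home/docker/cp-test.txt /tmp/cp-test.txt
    minikube -p multinode-946754 cp multinode-946754:/home/docker/cp-test.txt \
      multinode-946754-m02:/home/docker/cp-test.txt
    minikube -p multinode-946754 ssh -n multinode-946754-m02 "sudo cat /home/docker/cp-test.txt"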

TestMultiNode/serial/StopNode (2.25s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-946754 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-946754 node stop m03: (1.211317665s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-946754 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-946754 status: exit status 7 (523.628372ms)

-- stdout --
	multinode-946754
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-946754-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-946754-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-946754 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-946754 status --alsologtostderr: exit status 7 (516.700894ms)

-- stdout --
	multinode-946754
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-946754-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-946754-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0407 12:34:38.439933 1083394 out.go:345] Setting OutFile to fd 1 ...
	I0407 12:34:38.440141 1083394 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 12:34:38.440169 1083394 out.go:358] Setting ErrFile to fd 2...
	I0407 12:34:38.440187 1083394 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 12:34:38.440474 1083394 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20602-902080/.minikube/bin
	I0407 12:34:38.440739 1083394 out.go:352] Setting JSON to false
	I0407 12:34:38.440858 1083394 mustload.go:65] Loading cluster: multinode-946754
	I0407 12:34:38.440925 1083394 notify.go:220] Checking for updates...
	I0407 12:34:38.441325 1083394 config.go:182] Loaded profile config "multinode-946754": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0407 12:34:38.441368 1083394 status.go:174] checking status of multinode-946754 ...
	I0407 12:34:38.442130 1083394 cli_runner.go:164] Run: docker container inspect multinode-946754 --format={{.State.Status}}
	I0407 12:34:38.462043 1083394 status.go:371] multinode-946754 host status = "Running" (err=<nil>)
	I0407 12:34:38.462070 1083394 host.go:66] Checking if "multinode-946754" exists ...
	I0407 12:34:38.462369 1083394 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-946754
	I0407 12:34:38.479788 1083394 host.go:66] Checking if "multinode-946754" exists ...
	I0407 12:34:38.480089 1083394 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0407 12:34:38.480132 1083394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-946754
	I0407 12:34:38.504335 1083394 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34021 SSHKeyPath:/home/jenkins/minikube-integration/20602-902080/.minikube/machines/multinode-946754/id_rsa Username:docker}
	I0407 12:34:38.594220 1083394 ssh_runner.go:195] Run: systemctl --version
	I0407 12:34:38.598624 1083394 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0407 12:34:38.610059 1083394 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0407 12:34:38.684556 1083394 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-04-07 12:34:38.675925419 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1081-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0]] Warnings:<nil>}}
	I0407 12:34:38.685181 1083394 kubeconfig.go:125] found "multinode-946754" server: "https://192.168.58.2:8443"
	I0407 12:34:38.685215 1083394 api_server.go:166] Checking apiserver status ...
	I0407 12:34:38.685260 1083394 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 12:34:38.697509 1083394 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2335/cgroup
	I0407 12:34:38.707166 1083394 api_server.go:182] apiserver freezer: "11:freezer:/docker/a0d02cf3e5d5a65737c2239a5509f9b960c688147088ec6a8499fb810dd4cf7c/kubepods/burstable/pod22947003d4aec73b0ea647a7acef76be/ec706bacdadf68ca5abae13473eed316746634533db7532c35bacbfdf449808e"
	I0407 12:34:38.707232 1083394 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/a0d02cf3e5d5a65737c2239a5509f9b960c688147088ec6a8499fb810dd4cf7c/kubepods/burstable/pod22947003d4aec73b0ea647a7acef76be/ec706bacdadf68ca5abae13473eed316746634533db7532c35bacbfdf449808e/freezer.state
	I0407 12:34:38.715833 1083394 api_server.go:204] freezer state: "THAWED"
	I0407 12:34:38.715876 1083394 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0407 12:34:38.723689 1083394 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0407 12:34:38.723723 1083394 status.go:463] multinode-946754 apiserver status = Running (err=<nil>)
	I0407 12:34:38.723734 1083394 status.go:176] multinode-946754 status: &{Name:multinode-946754 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0407 12:34:38.723758 1083394 status.go:174] checking status of multinode-946754-m02 ...
	I0407 12:34:38.724064 1083394 cli_runner.go:164] Run: docker container inspect multinode-946754-m02 --format={{.State.Status}}
	I0407 12:34:38.742345 1083394 status.go:371] multinode-946754-m02 host status = "Running" (err=<nil>)
	I0407 12:34:38.742372 1083394 host.go:66] Checking if "multinode-946754-m02" exists ...
	I0407 12:34:38.742682 1083394 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-946754-m02
	I0407 12:34:38.759592 1083394 host.go:66] Checking if "multinode-946754-m02" exists ...
	I0407 12:34:38.759903 1083394 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0407 12:34:38.759960 1083394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-946754-m02
	I0407 12:34:38.777763 1083394 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34026 SSHKeyPath:/home/jenkins/minikube-integration/20602-902080/.minikube/machines/multinode-946754-m02/id_rsa Username:docker}
	I0407 12:34:38.865999 1083394 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0407 12:34:38.878101 1083394 status.go:176] multinode-946754-m02 status: &{Name:multinode-946754-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0407 12:34:38.878135 1083394 status.go:174] checking status of multinode-946754-m03 ...
	I0407 12:34:38.878476 1083394 cli_runner.go:164] Run: docker container inspect multinode-946754-m03 --format={{.State.Status}}
	I0407 12:34:38.898781 1083394 status.go:371] multinode-946754-m03 host status = "Stopped" (err=<nil>)
	I0407 12:34:38.898809 1083394 status.go:384] host is not running, skipping remaining checks
	I0407 12:34:38.898816 1083394 status.go:176] multinode-946754-m03 status: &{Name:multinode-946754-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.25s)
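Worth noting for scripting: with one node stopped, minikube status exits non-zero (7 in both runs above) rather than 0, so the exit code alone flags a degraded cluster; a sketch:

    minikube -p multinode-946754 status
    echo $?   # 0 with everything Running; 7 here with m03 stopped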

TestMultiNode/serial/StartAfterStop (11.09s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-946754 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-946754 node start m03 -v=7 --alsologtostderr: (10.336249635s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-946754 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (11.09s)

TestMultiNode/serial/RestartKeepsNodes (86.05s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-946754
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-946754
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-946754: (22.741816721s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-946754 --wait=true -v=8 --alsologtostderr
E0407 12:35:39.984399  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/functional-020915/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-946754 --wait=true -v=8 --alsologtostderr: (1m3.165517327s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-946754
--- PASS: TestMultiNode/serial/RestartKeepsNodes (86.05s)

TestMultiNode/serial/DeleteNode (5.45s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-946754 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-946754 node delete m03: (4.681105989s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-946754 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.45s)

TestMultiNode/serial/StopMultiNode (21.57s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-946754 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-946754 stop: (21.377502922s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-946754 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-946754 status: exit status 7 (99.151856ms)

-- stdout --
	multinode-946754
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-946754-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-946754 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-946754 status --alsologtostderr: exit status 7 (91.828704ms)

-- stdout --
	multinode-946754
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-946754-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0407 12:36:43.014006 1097260 out.go:345] Setting OutFile to fd 1 ...
	I0407 12:36:43.014121 1097260 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 12:36:43.014132 1097260 out.go:358] Setting ErrFile to fd 2...
	I0407 12:36:43.014136 1097260 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 12:36:43.014452 1097260 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20602-902080/.minikube/bin
	I0407 12:36:43.014657 1097260 out.go:352] Setting JSON to false
	I0407 12:36:43.014696 1097260 mustload.go:65] Loading cluster: multinode-946754
	I0407 12:36:43.014741 1097260 notify.go:220] Checking for updates...
	I0407 12:36:43.015097 1097260 config.go:182] Loaded profile config "multinode-946754": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0407 12:36:43.015118 1097260 status.go:174] checking status of multinode-946754 ...
	I0407 12:36:43.015592 1097260 cli_runner.go:164] Run: docker container inspect multinode-946754 --format={{.State.Status}}
	I0407 12:36:43.034859 1097260 status.go:371] multinode-946754 host status = "Stopped" (err=<nil>)
	I0407 12:36:43.034880 1097260 status.go:384] host is not running, skipping remaining checks
	I0407 12:36:43.034887 1097260 status.go:176] multinode-946754 status: &{Name:multinode-946754 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0407 12:36:43.034913 1097260 status.go:174] checking status of multinode-946754-m02 ...
	I0407 12:36:43.035227 1097260 cli_runner.go:164] Run: docker container inspect multinode-946754-m02 --format={{.State.Status}}
	I0407 12:36:43.056247 1097260 status.go:371] multinode-946754-m02 host status = "Stopped" (err=<nil>)
	I0407 12:36:43.056269 1097260 status.go:384] host is not running, skipping remaining checks
	I0407 12:36:43.056277 1097260 status.go:176] multinode-946754-m02 status: &{Name:multinode-946754-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (21.57s)

TestMultiNode/serial/RestartMultiNode (49.83s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-946754 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-946754 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (49.164221139s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-946754 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (49.83s)

TestMultiNode/serial/ValidateNameConflict (35.82s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-946754
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-946754-m02 --driver=docker  --container-runtime=docker
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-946754-m02 --driver=docker  --container-runtime=docker: exit status 14 (103.938618ms)

-- stdout --
	* [multinode-946754-m02] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20602
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20602-902080/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20602-902080/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-946754-m02' is duplicated with machine name 'multinode-946754-m02' in profile 'multinode-946754'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-946754-m03 --driver=docker  --container-runtime=docker
E0407 12:37:42.617091  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/addons-184883/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-946754-m03 --driver=docker  --container-runtime=docker: (33.058665836s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-946754
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-946754: exit status 80 (352.29161ms)

-- stdout --
	* Adding node m03 to cluster multinode-946754 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-946754-m03 already exists in multinode-946754-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-946754-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-946754-m03: (2.245000283s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (35.82s)
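Both failures here are deliberate name-collision guards: exit 14 (MK_USAGE) when a new profile name duplicates an existing machine name, and exit 80 (GUEST_NODE_ADD) when node add would recreate an existing machine. The first is easy to trip by hand while the multinode-946754 profile from this run still exists; a sketch:

    minikube start -p multinode-946754-m02 --driver=docker
    echo $?   # expect 14: "Profile name should be unique"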

TestPreload (138.81s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-426812 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4
E0407 12:39:16.922259  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/functional-020915/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-426812 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4: (1m42.138693866s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-426812 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-426812 image pull gcr.io/k8s-minikube/busybox: (2.355008514s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-426812
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-426812: (10.815073828s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-426812 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-426812 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker: (20.787706633s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-426812 image list
helpers_test.go:175: Cleaning up "test-preload-426812" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-426812
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-426812: (2.35620107s)
--- PASS: TestPreload (138.81s)
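The preload check is a round trip: start with preloaded images disabled, pull an extra image, stop, restart, and confirm the image survived the restart. A sketch with a throwaway profile, using the same busybox image the test pulls:

    minikube start -p preload-demo --preload=false --driver=docker \
      --kubernetes-version=v1.24.4
    minikube -p preload-demo image pull gcr.io/k8s-minikube/busybox
    minikube stop -p preload-demo
    minikube start -p preload-demo
    minikube -p preload-demo image list   # busybox should still be listed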

TestScheduledStopUnix (104.73s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-964930 --memory=2048 --driver=docker  --container-runtime=docker
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-964930 --memory=2048 --driver=docker  --container-runtime=docker: (31.464890082s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-964930 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-964930 -n scheduled-stop-964930
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-964930 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0407 12:41:03.601031  907461 retry.go:31] will retry after 108.81µs: open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/scheduled-stop-964930/pid: no such file or directory
I0407 12:41:03.602221  907461 retry.go:31] will retry after 165.841µs: open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/scheduled-stop-964930/pid: no such file or directory
I0407 12:41:03.603365  907461 retry.go:31] will retry after 176.151µs: open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/scheduled-stop-964930/pid: no such file or directory
I0407 12:41:03.604503  907461 retry.go:31] will retry after 495.864µs: open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/scheduled-stop-964930/pid: no such file or directory
I0407 12:41:03.605600  907461 retry.go:31] will retry after 478.196µs: open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/scheduled-stop-964930/pid: no such file or directory
I0407 12:41:03.606727  907461 retry.go:31] will retry after 1.013313ms: open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/scheduled-stop-964930/pid: no such file or directory
I0407 12:41:03.607841  907461 retry.go:31] will retry after 581.966µs: open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/scheduled-stop-964930/pid: no such file or directory
I0407 12:41:03.608963  907461 retry.go:31] will retry after 934.351µs: open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/scheduled-stop-964930/pid: no such file or directory
I0407 12:41:03.610102  907461 retry.go:31] will retry after 2.436868ms: open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/scheduled-stop-964930/pid: no such file or directory
I0407 12:41:03.613313  907461 retry.go:31] will retry after 5.653848ms: open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/scheduled-stop-964930/pid: no such file or directory
I0407 12:41:03.619729  907461 retry.go:31] will retry after 4.661697ms: open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/scheduled-stop-964930/pid: no such file or directory
I0407 12:41:03.625201  907461 retry.go:31] will retry after 11.995173ms: open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/scheduled-stop-964930/pid: no such file or directory
I0407 12:41:03.637532  907461 retry.go:31] will retry after 12.74831ms: open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/scheduled-stop-964930/pid: no such file or directory
I0407 12:41:03.650783  907461 retry.go:31] will retry after 28.631441ms: open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/scheduled-stop-964930/pid: no such file or directory
I0407 12:41:03.680005  907461 retry.go:31] will retry after 30.523216ms: open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/scheduled-stop-964930/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-964930 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-964930 -n scheduled-stop-964930
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-964930
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-964930 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-964930
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-964930: exit status 7 (70.910346ms)
-- stdout --
	scheduled-stop-964930
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-964930 -n scheduled-stop-964930
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-964930 -n scheduled-stop-964930: exit status 7 (72.539053ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-964930" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-964930
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-964930: (1.680360511s)
--- PASS: TestScheduledStopUnix (104.73s)
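
The retry.go lines above show the test polling for the scheduled-stop pid file with intervals that grow roughly exponentially, from about 100µs up to tens of milliseconds. As a rough illustration only (not minikube's actual retry helper; the path and time budget are invented for the example), the pattern looks like this in Go:

// Poll for a file with roughly exponential backoff, as in the
// "will retry after ..." lines above.
package main

import (
	"fmt"
	"os"
	"time"
)

func waitForPidFile(path string, budget time.Duration) ([]byte, error) {
	delay := 100 * time.Microsecond // first retry in the log is ~108µs
	deadline := time.Now().Add(budget)
	for {
		data, err := os.ReadFile(path)
		if err == nil {
			return data, nil
		}
		if time.Now().After(deadline) {
			return nil, fmt.Errorf("pid file never appeared: %w", err)
		}
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
		delay *= 2 // roughly doubles, matching the intervals logged above
	}
}

func main() {
	if _, err := waitForPidFile("/tmp/scheduled-stop.pid", time.Second); err != nil {
		fmt.Println(err)
	}
}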

TestSkaffold (117.21s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe2495326601 version
skaffold_test.go:63: skaffold version: v2.15.0
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p skaffold-259543 --memory=2600 --driver=docker  --container-runtime=docker
E0407 12:42:42.616337  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/addons-184883/client.crt: no such file or directory" logger="UnhandledError"
skaffold_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p skaffold-259543 --memory=2600 --driver=docker  --container-runtime=docker: (29.796573516s)
skaffold_test.go:86: copying out/minikube-linux-arm64 to /home/jenkins/workspace/Docker_Linux_docker_arm64/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe2495326601 run --minikube-profile skaffold-259543 --kube-context skaffold-259543 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe2495326601 run --minikube-profile skaffold-259543 --kube-context skaffold-259543 --status-check=true --port-forward=false --interactive=false: (1m11.914298349s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-ff95d99c-dr4vz" [0754abe8-2edb-4f05-8191-3543845c3b99] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.003208865s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-bbc55cdb4-xctrw" [d66ef232-bcd6-4389-a561-19178a3a9cff] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.003079925s
helpers_test.go:175: Cleaning up "skaffold-259543" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p skaffold-259543
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p skaffold-259543: (2.99906995s)
--- PASS: TestSkaffold (117.21s)

TestInsufficientStorage (11.22s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-860738 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker
E0407 12:44:16.922995  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/functional-020915/client.crt: no such file or directory" logger="UnhandledError"
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-860738 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker: exit status 26 (8.884373266s)
-- stdout --
	{"specversion":"1.0","id":"5097f233-0623-4d26-8d15-4104cfd71432","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-860738] minikube v1.35.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"4bc89597-bd08-4ad1-bf05-686ab588a09d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20602"}}
	{"specversion":"1.0","id":"e7556e6f-888a-4816-8b45-2bb038529535","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"dc729399-23af-4c44-9d28-1f92ba71d973","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20602-902080/kubeconfig"}}
	{"specversion":"1.0","id":"d9762c7d-7136-4b8e-8a01-5c67661bab4e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20602-902080/.minikube"}}
	{"specversion":"1.0","id":"079b9cfa-d582-4be6-ac2a-ef6c87d7f72e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"d3a48b74-9e4e-4f46-a6c6-5986e70112e8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"0128570c-8161-4593-b91e-d848f9506bde","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"fa4be619-0f12-425a-9d5c-714e16ce877c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"545b77af-2cc1-47a1-98a1-154ae1165a07","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"189f70c5-1e4e-4fc7-aee2-305d42e8b028","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"1378aef8-d524-4fe8-a49e-fb22ff5352dd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-860738\" primary control-plane node in \"insufficient-storage-860738\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"49b523af-48ed-4683-a07b-ac521fb8caa1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.46-1743675393-20591 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"1e99ded9-cfd9-43cd-8fa4-25d8e589d9e3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"a9c5c7b2-4474-4fad-88d7-0d76d5c575c9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-860738 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-860738 --output=json --layout=cluster: exit status 7 (289.002207ms)
-- stdout --
	{"Name":"insufficient-storage-860738","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-860738","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E0407 12:44:22.693600 1133232 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-860738" does not appear in /home/jenkins/minikube-integration/20602-902080/kubeconfig
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-860738 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-860738 --output=json --layout=cluster: exit status 7 (291.268977ms)
-- stdout --
	{"Name":"insufficient-storage-860738","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-860738","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E0407 12:44:22.984624 1133293 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-860738" does not appear in /home/jenkins/minikube-integration/20602-902080/kubeconfig
	E0407 12:44:22.995213 1133293 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/insufficient-storage-860738/events.json: no such file or directory
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-860738" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-860738
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-860738: (1.759428748s)
--- PASS: TestInsufficientStorage (11.22s)
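
The --output=json run above emits one CloudEvents-style JSON object per line (type io.k8s.sigs.minikube.step, .info, or .error, with the payload under data). A minimal Go sketch of consuming that stream, assuming only the field names visible in the log above:

// Decode minikube's line-delimited --output=json events and report
// error events such as RSRC_DOCKER_STORAGE (exitcode 26) above.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

type minikubeEvent struct {
	Type string `json:"type"`
	Data struct {
		Name     string `json:"name"`
		Message  string `json:"message"`
		ExitCode string `json:"exitcode"` // only set on error events
	} `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin) // e.g. minikube start --output=json | thisprogram
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // event lines can be long
	for sc.Scan() {
		var ev minikubeEvent
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // tolerate non-JSON lines
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("error %s (exit %s): %s\n", ev.Data.Name, ev.Data.ExitCode, ev.Data.Message)
		}
	}
}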

TestRunningBinaryUpgrade (105.86s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.1595907063 start -p running-upgrade-779800 --memory=2200 --vm-driver=docker  --container-runtime=docker
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.1595907063 start -p running-upgrade-779800 --memory=2200 --vm-driver=docker  --container-runtime=docker: (49.471136039s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-779800 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0407 12:49:40.496248  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/skaffold-259543/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-779800 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (53.100291849s)
helpers_test.go:175: Cleaning up "running-upgrade-779800" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-779800
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-779800: (2.577825183s)
--- PASS: TestRunningBinaryUpgrade (105.86s)

TestKubernetesUpgrade (128.49s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-998218 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-998218 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (58.998072894s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-998218
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-998218: (10.876771848s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-998218 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-998218 status --format={{.Host}}: exit status 7 (111.524284ms)
-- stdout --
	Stopped
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-998218 --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-998218 --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (31.267824206s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-998218 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-998218 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=docker
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-998218 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=docker: exit status 106 (98.693405ms)
-- stdout --
	* [kubernetes-upgrade-998218] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20602
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20602-902080/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20602-902080/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.32.2 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-998218
	    minikube start -p kubernetes-upgrade-998218 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-9982182 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.32.2, by running:
	    
	    minikube start -p kubernetes-upgrade-998218 --kubernetes-version=v1.32.2
	    
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-998218 --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-998218 --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (23.292340181s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-998218" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-998218
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-998218: (3.743996934s)
--- PASS: TestKubernetesUpgrade (128.49s)
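
The downgrade attempt above fails fast with exit status 106 (K8S_DOWNGRADE_UNSUPPORTED) and leaves the existing v1.32.2 cluster untouched. A hypothetical wrapper (not the test's own code) could branch on that exit code like this:

// Detect minikube's downgrade guard by its exit code (106 above).
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("minikube", "start", "-p", "kubernetes-upgrade-998218",
		"--kubernetes-version=v1.20.0", "--driver=docker")
	out, err := cmd.CombinedOutput()
	if exitErr, ok := err.(*exec.ExitError); ok && exitErr.ExitCode() == 106 {
		// Downgrade refused; cluster untouched. Recreate it or keep the
		// newer version, as the suggestion text above spells out.
		fmt.Printf("downgrade refused:\n%s", out)
		return
	}
	if err != nil {
		fmt.Println("start failed:", err)
		return
	}
	fmt.Println("started")
}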

TestMissingContainerUpgrade (172.99s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.2461947104 start -p missing-upgrade-139026 --memory=2200 --driver=docker  --container-runtime=docker
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.2461947104 start -p missing-upgrade-139026 --memory=2200 --driver=docker  --container-runtime=docker: (1m35.64605528s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-139026
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-139026: (10.507886491s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-139026
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-139026 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0407 12:47:42.616454  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/addons-184883/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-139026 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (1m3.299275989s)
helpers_test.go:175: Cleaning up "missing-upgrade-139026" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-139026
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-139026: (2.605678216s)
--- PASS: TestMissingContainerUpgrade (172.99s)

TestPause/serial/Start (51s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-923631 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-923631 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker: (51.001931815s)
--- PASS: TestPause/serial/Start (51.00s)

TestPause/serial/SecondStartNoReconfiguration (34.96s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-923631 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0407 12:45:45.684549  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/addons-184883/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-923631 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (34.933120032s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (34.96s)

TestPause/serial/Pause (0.61s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-923631 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.61s)

TestPause/serial/VerifyStatus (0.37s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-923631 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-923631 --output=json --layout=cluster: exit status 2 (369.054952ms)
-- stdout --
	{"Name":"pause-923631","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-923631","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.37s)

TestPause/serial/Unpause (0.77s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-923631 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.77s)

TestPause/serial/PauseAgain (0.89s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-923631 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.89s)

TestPause/serial/DeletePaused (2.41s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-923631 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-923631 --alsologtostderr -v=5: (2.410704241s)
--- PASS: TestPause/serial/DeletePaused (2.41s)

TestPause/serial/VerifyDeletedResources (0.22s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-923631
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-923631: exit status 1 (33.841175ms)
-- stdout --
	[]
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-923631: no such volume
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.22s)
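
The deleted-resources check above leans on `docker volume inspect` exiting non-zero ("no such volume") once the profile's volume is gone. A minimal sketch of the same check:

// Treat a failing `docker volume inspect` as proof the volume was removed.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func volumeGone(name string) bool {
	out, err := exec.Command("docker", "volume", "inspect", name).CombinedOutput()
	// On a deleted volume this exits 1 and prints "Error response from
	// daemon: get NAME: no such volume", as in the log above.
	return err != nil && strings.Contains(string(out), "no such volume")
}

func main() {
	fmt.Println(volumeGone("pause-923631"))
}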

TestStoppedBinaryUpgrade/Setup (0.72s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.72s)

TestStoppedBinaryUpgrade/Upgrade (95.76s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1955215584 start -p stopped-upgrade-594211 --memory=2200 --vm-driver=docker  --container-runtime=docker
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1955215584 start -p stopped-upgrade-594211 --memory=2200 --vm-driver=docker  --container-runtime=docker: (47.116815486s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1955215584 -p stopped-upgrade-594211 stop
E0407 12:48:59.506332  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/skaffold-259543/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:48:59.512727  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/skaffold-259543/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:48:59.524075  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/skaffold-259543/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:48:59.545454  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/skaffold-259543/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:48:59.587855  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/skaffold-259543/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:48:59.669274  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/skaffold-259543/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:48:59.830722  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/skaffold-259543/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:49:00.160241  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/skaffold-259543/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:49:00.807632  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/skaffold-259543/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:49:02.088930  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/skaffold-259543/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1955215584 -p stopped-upgrade-594211 stop: (10.911023704s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-594211 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0407 12:49:04.650245  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/skaffold-259543/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:49:09.772135  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/skaffold-259543/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:49:16.921854  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/functional-020915/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:49:20.014569  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/skaffold-259543/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-594211 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (37.72814964s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (95.76s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.99s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-594211
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-594211: (1.990389161s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.99s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.14s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-383245 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-383245 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker: exit status 14 (143.381889ms)
-- stdout --
	* [NoKubernetes-383245] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20602
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20602-902080/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20602-902080/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.14s)
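
The guard above rejects --kubernetes-version combined with --no-kubernetes as a usage error (MK_USAGE, exit status 14). A toy sketch of that mutual-exclusion check, with flag names mirroring the CLI but none of minikube's real option plumbing:

// Reject mutually exclusive flags the way the MK_USAGE error above does.
package main

import (
	"flag"
	"fmt"
	"os"
)

func main() {
	noKubernetes := flag.Bool("no-kubernetes", false, "start without Kubernetes")
	kubeVersion := flag.String("kubernetes-version", "", "Kubernetes version to run")
	flag.Parse()

	if *noKubernetes && *kubeVersion != "" {
		fmt.Fprintln(os.Stderr, "X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes")
		os.Exit(14) // the exit status seen in the log above
	}
	fmt.Println("ok")
}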

TestNoKubernetes/serial/StartWithK8s (42.44s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-383245 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-383245 --driver=docker  --container-runtime=docker: (41.879912732s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-383245 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (42.44s)

TestNoKubernetes/serial/StartWithStopK8s (19.54s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-383245 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-383245 --no-kubernetes --driver=docker  --container-runtime=docker: (17.230203697s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-383245 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-383245 status -o json: exit status 2 (394.02752ms)
-- stdout --
	{"Name":"NoKubernetes-383245","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-383245
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-383245: (1.914242321s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (19.54s)

TestNoKubernetes/serial/Start (8.71s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-383245 --no-kubernetes --driver=docker  --container-runtime=docker
E0407 12:51:43.379508  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/skaffold-259543/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-383245 --no-kubernetes --driver=docker  --container-runtime=docker: (8.711314048s)
--- PASS: TestNoKubernetes/serial/Start (8.71s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.36s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-383245 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-383245 "sudo systemctl is-active --quiet service kubelet": exit status 1 (360.015927ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.36s)
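
The check above treats a non-zero exit from `systemctl is-active` as proof the kubelet is down; status 3 means "inactive" and surfaces through minikube ssh as "Process exited with status 3". A minimal local sketch of the same probe:

// Probe a systemd unit the way the VerifyK8sNotRunning step does
// (the test runs this inside the node via `minikube ssh`).
package main

import (
	"fmt"
	"os/exec"
)

func kubeletActive() bool {
	// Mirrors the test's literal command line, including the extra
	// "service" argument it passes.
	err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet").Run()
	return err == nil // non-zero exit (e.g. 3 = inactive) means not running
}

func main() {
	fmt.Println("kubelet active:", kubeletActive())
}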

TestNoKubernetes/serial/ProfileList (1.26s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.26s)

TestNoKubernetes/serial/Stop (1.28s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-383245
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-383245: (1.276972645s)
--- PASS: TestNoKubernetes/serial/Stop (1.28s)

TestNoKubernetes/serial/StartNoArgs (8.67s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-383245 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-383245 --driver=docker  --container-runtime=docker: (8.673987206s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (8.67s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.31s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-383245 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-383245 "sudo systemctl is-active --quiet service kubelet": exit status 1 (312.749031ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.31s)

TestStartStop/group/old-k8s-version/serial/FirstStart (147.46s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-907855 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0
E0407 12:53:59.506936  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/skaffold-259543/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:54:16.922035  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/functional-020915/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:54:27.221102  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/skaffold-259543/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-907855 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0: (2m27.455905418s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (147.46s)

TestStartStop/group/no-preload/serial/FirstStart (52.75s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-302149 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-302149 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.2: (52.752106s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (52.75s)

TestStartStop/group/old-k8s-version/serial/DeployApp (11.79s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-907855 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [ac43966d-bb26-470b-937f-974874dbd9fe] Pending
helpers_test.go:344: "busybox" [ac43966d-bb26-470b-937f-974874dbd9fe] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [ac43966d-bb26-470b-937f-974874dbd9fe] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 11.01379241s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-907855 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (11.79s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.73s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-907855 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-907855 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.536977229s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-907855 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.73s)

TestStartStop/group/old-k8s-version/serial/Stop (11.5s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-907855 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-907855 --alsologtostderr -v=3: (11.497394988s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.50s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.27s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-907855 -n old-k8s-version-907855
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-907855 -n old-k8s-version-907855: exit status 7 (94.694545ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-907855 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.27s)

TestStartStop/group/no-preload/serial/DeployApp (9.5s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-302149 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [e5e05c77-2c87-46d3-8d12-5e3d8cb1f6d2] Pending
helpers_test.go:344: "busybox" [e5e05c77-2c87-46d3-8d12-5e3d8cb1f6d2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [e5e05c77-2c87-46d3-8d12-5e3d8cb1f6d2] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.004341274s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-302149 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.50s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.84s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-302149 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-302149 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.615988948s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-302149 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.84s)

TestStartStop/group/no-preload/serial/Stop (11.23s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-302149 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-302149 --alsologtostderr -v=3: (11.23125736s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.23s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-302149 -n no-preload-302149
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-302149 -n no-preload-302149: exit status 7 (78.258123ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-302149 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/no-preload/serial/SecondStart (269.95s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-302149 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.2
E0407 12:57:42.616733  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/addons-184883/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:58:59.506654  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/skaffold-259543/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:59:16.922797  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/functional-020915/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-302149 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.2: (4m29.539946152s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-302149 -n no-preload-302149
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (269.95s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-5k8jh" [0e299c20-ed02-44c3-88c9-3dd6e5ec9708] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.008081429s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-5k8jh" [0e299c20-ed02-44c3-88c9-3dd6e5ec9708] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003259394s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-302149 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-302149 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)
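The image verification step lists every image loaded in the cluster and reports any that are not part of minikube's expected set; the gcr.io/k8s-minikube/busybox image comes from the busybox deployment in the earlier DeployApp step, so it is merely noted. The same listing can be reproduced with (profile name is a placeholder):

	out/minikube-linux-arm64 -p <profile> image list --format=json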

TestStartStop/group/no-preload/serial/Pause (2.99s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-302149 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-302149 -n no-preload-302149
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-302149 -n no-preload-302149: exit status 2 (331.128651ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-302149 -n no-preload-302149
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-302149 -n no-preload-302149: exit status 2 (333.474165ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-302149 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-302149 -n no-preload-302149
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-302149 -n no-preload-302149
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.99s)
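The pause sequence above exercises the full freeze/thaw cycle: `pause` suspends the control-plane components, after which `status` reports the API server as "Paused" and the kubelet as "Stopped" (the exit status 2 is expected while paused, hence "may be ok"), and `unpause` restores both. The equivalent manual sequence, with the profile name as a placeholder:

	out/minikube-linux-arm64 pause -p <profile> --alsologtostderr -v=1
	out/minikube-linux-arm64 status --format={{.APIServer}} -p <profile>   # "Paused", exit 2
	out/minikube-linux-arm64 status --format={{.Kubelet}} -p <profile>     # "Stopped", exit 2
	out/minikube-linux-arm64 unpause -p <profile> --alsologtostderr -v=1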

TestStartStop/group/embed-certs/serial/FirstStart (47.47s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-717935 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-717935 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.2: (47.465270778s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (47.47s)

TestStartStop/group/embed-certs/serial/DeployApp (10.46s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-717935 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [8b7861b1-cb30-4db1-81b2-7d72572eb370] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [8b7861b1-cb30-4db1-81b2-7d72572eb370] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.003783951s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-717935 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.46s)
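DeployApp is a smoke test of basic scheduling: it creates the busybox pod from testdata/busybox.yaml, waits for it to reach Running, then execs into it to read the container's open-file-descriptor limit. Roughly, with the kubectl context name as a placeholder:

	kubectl --context <profile> create -f testdata/busybox.yaml
	kubectl --context <profile> exec busybox -- /bin/sh -c "ulimit -n"   # fd soft limit inside the container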

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-dvssg" [8d67f783-1b31-448f-80be-f951e189986f] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.011898148s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.1s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-717935 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-717935 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.10s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.19s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-dvssg" [8d67f783-1b31-448f-80be-f951e189986f] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004254063s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-907855 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.19s)

TestStartStop/group/embed-certs/serial/Stop (11.04s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-717935 --alsologtostderr -v=3
E0407 13:02:25.686407  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/addons-184883/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-717935 --alsologtostderr -v=3: (11.04262239s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.04s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-907855 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/old-k8s-version/serial/Pause (2.64s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-907855 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-907855 -n old-k8s-version-907855
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-907855 -n old-k8s-version-907855: exit status 2 (305.983024ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-907855 -n old-k8s-version-907855
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-907855 -n old-k8s-version-907855: exit status 2 (320.481998ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-907855 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-907855 -n old-k8s-version-907855
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-907855 -n old-k8s-version-907855
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.64s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (48.76s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-757345 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-757345 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.2: (48.755212955s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (48.76s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.3s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-717935 -n embed-certs-717935
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-717935 -n embed-certs-717935: exit status 7 (127.697585ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-717935 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.30s)

TestStartStop/group/embed-certs/serial/SecondStart (270.87s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-717935 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.2
E0407 13:02:42.616497  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/addons-184883/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-717935 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.2: (4m30.46814107s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-717935 -n embed-certs-717935
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (270.87s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.37s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-757345 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [ce7733f8-b875-41f7-b738-8e6d31dc7b1c] Pending
helpers_test.go:344: "busybox" [ce7733f8-b875-41f7-b738-8e6d31dc7b1c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [ce7733f8-b875-41f7-b738-8e6d31dc7b1c] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.003330217s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-757345 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.37s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-757345 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-757345 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.08s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (11.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-757345 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-757345 --alsologtostderr -v=3: (11.056869169s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.06s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-757345 -n default-k8s-diff-port-757345
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-757345 -n default-k8s-diff-port-757345: exit status 7 (79.636256ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-757345 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (302.8s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-757345 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.2
E0407 13:03:59.507298  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/skaffold-259543/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:04:16.922363  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/functional-020915/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:05:22.582508  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/skaffold-259543/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:05:41.012230  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/old-k8s-version-907855/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:05:41.018714  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/old-k8s-version-907855/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:05:41.030109  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/old-k8s-version-907855/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:05:41.051543  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/old-k8s-version-907855/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:05:41.092943  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/old-k8s-version-907855/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:05:41.175107  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/old-k8s-version-907855/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:05:41.336559  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/old-k8s-version-907855/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:05:41.658057  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/old-k8s-version-907855/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:05:42.299452  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/old-k8s-version-907855/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:05:43.581698  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/old-k8s-version-907855/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:05:46.143545  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/old-k8s-version-907855/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:05:51.264991  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/old-k8s-version-907855/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:06:01.506953  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/old-k8s-version-907855/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:06:16.890143  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/no-preload-302149/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:06:16.896553  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/no-preload-302149/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:06:16.907975  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/no-preload-302149/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:06:16.929567  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/no-preload-302149/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:06:16.971148  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/no-preload-302149/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:06:17.052655  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/no-preload-302149/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:06:17.214162  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/no-preload-302149/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:06:17.536089  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/no-preload-302149/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:06:18.178066  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/no-preload-302149/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:06:19.459669  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/no-preload-302149/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:06:21.988612  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/old-k8s-version-907855/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:06:22.021022  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/no-preload-302149/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:06:27.142856  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/no-preload-302149/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:06:37.385035  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/no-preload-302149/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:06:57.866319  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/no-preload-302149/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:07:02.950435  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/old-k8s-version-907855/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-757345 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.2: (5m2.282968677s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-757345 -n default-k8s-diff-port-757345
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (302.80s)
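The E-level cert_rotation lines interleaved above come from client-go's certificate-rotation watcher in the long-running test process: it still tracks client certificates for profiles that earlier tests have since deleted (old-k8s-version-907855, no-preload-302149, ...), so each refresh attempt logs an UnhandledError for the missing client.crt. They are noise relative to the test under way. A sketch for spotting such stale references, assuming the kubeconfig from this run is active:

	kubectl config view -o jsonpath='{range .users[*]}{.name}{"\t"}{.user.client-certificate}{"\n"}{end}'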

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-z4vrw" [0982f9f2-20da-4146-9f38-76447e5aafd9] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003198053s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-z4vrw" [0982f9f2-20da-4146-9f38-76447e5aafd9] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003231163s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-717935 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-717935 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/embed-certs/serial/Pause (2.84s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-717935 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-717935 -n embed-certs-717935
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-717935 -n embed-certs-717935: exit status 2 (337.848504ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-717935 -n embed-certs-717935
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-717935 -n embed-certs-717935: exit status 2 (336.975279ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-717935 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-717935 -n embed-certs-717935
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-717935 -n embed-certs-717935
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.84s)

TestStartStop/group/newest-cni/serial/FirstStart (36.13s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-251315 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.2
E0407 13:07:38.827725  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/no-preload-302149/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:07:42.616929  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/addons-184883/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-251315 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.2: (36.128189964s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (36.13s)
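This start exercises minikube's component flag pass-through: `--extra-config=<component>.<flag>=<value>` forwards a flag to the named component at cluster bring-up (here, kubeadm's pod-network-cidr), while `--wait=apiserver,system_pods,default_sa` narrows the readiness gate, since with `--network-plugin=cni` and no CNI installed yet, regular pods cannot schedule. The general form, with placeholder values:

	out/minikube-linux-arm64 start -p <profile> \
	  --network-plugin=cni \
	  --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
	  --wait=apiserver,system_pods,default_sa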

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.2s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-251315 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-251315 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.198166171s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.20s)

TestStartStop/group/newest-cni/serial/Stop (11.23s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-251315 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-251315 --alsologtostderr -v=3: (11.231963758s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (11.23s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-251315 -n newest-cni-251315
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-251315 -n newest-cni-251315: exit status 7 (76.308388ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-251315 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/newest-cni/serial/SecondStart (18.82s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-251315 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.2
E0407 13:08:24.872388  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/old-k8s-version-907855/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-251315 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.2: (18.283580107s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-251315 -n newest-cni-251315
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (18.82s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-251315 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)

TestStartStop/group/newest-cni/serial/Pause (3.44s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-251315 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-251315 -n newest-cni-251315
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-251315 -n newest-cni-251315: exit status 2 (398.254008ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-251315 -n newest-cni-251315
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-251315 -n newest-cni-251315: exit status 2 (398.530163ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-251315 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-251315 -n newest-cni-251315
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-251315 -n newest-cni-251315
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.44s)

TestNetworkPlugins/group/auto/Start (79.52s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-569293 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-569293 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker: (1m19.522186814s)
--- PASS: TestNetworkPlugins/group/auto/Start (79.52s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-r62b2" [0c88e3e1-a7b4-49fa-bdfc-60020dde3706] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004544666s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.13s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-r62b2" [0c88e3e1-a7b4-49fa-bdfc-60020dde3706] Running
E0407 13:08:59.507074  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/skaffold-259543/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:08:59.988313  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/functional-020915/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00373734s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-757345 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.13s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-757345 image list --format=json
E0407 13:09:00.749017  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/no-preload-302149/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.99s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-757345 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-757345 -n default-k8s-diff-port-757345
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-757345 -n default-k8s-diff-port-757345: exit status 2 (428.058154ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-757345 -n default-k8s-diff-port-757345
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-757345 -n default-k8s-diff-port-757345: exit status 2 (338.376641ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-757345 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-757345 -n default-k8s-diff-port-757345
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-757345 -n default-k8s-diff-port-757345
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.99s)
E0407 13:16:41.215462  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/kindnet-569293/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:17:00.275781  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/calico-569293/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:17:00.282403  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/calico-569293/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:17:00.293945  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/calico-569293/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:17:00.316142  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/calico-569293/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:17:00.357708  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/calico-569293/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:17:00.439306  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/calico-569293/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:17:00.600940  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/calico-569293/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:17:00.922491  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/calico-569293/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:17:01.564876  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/calico-569293/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:17:02.846571  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/calico-569293/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:17:05.408464  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/calico-569293/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:17:08.802545  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/custom-flannel-569293/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:17:08.809010  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/custom-flannel-569293/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:17:08.820341  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/custom-flannel-569293/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:17:08.841694  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/custom-flannel-569293/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:17:08.883171  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/custom-flannel-569293/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:17:08.964527  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/custom-flannel-569293/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:17:09.126011  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/custom-flannel-569293/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:17:09.447876  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/custom-flannel-569293/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:17:10.089956  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/custom-flannel-569293/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:17:10.529819  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/calico-569293/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:17:11.371984  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/custom-flannel-569293/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:17:13.933368  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/custom-flannel-569293/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:17:19.055748  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/custom-flannel-569293/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:17:20.771383  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/calico-569293/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:17:29.297615  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/custom-flannel-569293/client.crt: no such file or directory" logger="UnhandledError"

TestNetworkPlugins/group/kindnet/Start (71.14s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-569293 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker
E0407 13:09:16.922836  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/functional-020915/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-569293 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker: (1m11.135995379s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (71.14s)

TestNetworkPlugins/group/auto/KubeletFlags (0.45s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-569293 "pgrep -a kubelet"
I0407 13:09:58.507886  907461 config.go:182] Loaded profile config "auto-569293": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.45s)
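KubeletFlags captures the kubelet command line inside the node container so the test can inspect how kubelet was launched. The same check by hand:

    # pgrep -a prints each matching PID together with its full argument list.
    out/minikube-linux-arm64 ssh -p auto-569293 "pgrep -a kubelet"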

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (12.44s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-569293 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-b5gdk" [44404429-b98e-4772-9685-a0c833d162ba] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-b5gdk" [44404429-b98e-4772-9685-a0c833d162ba] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.003427913s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.44s)
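NetCatPod force-replaces a Deployment named netcat and waits for its pods (label app=netcat, single container dnsutils) to reach Running. The sketch below is a hypothetical stand-in for testdata/netcat-deployment.yaml, not the manifest the suite actually ships: the image and command are placeholders, and the Service is inferred from the Localhost/HairPin probes against port 8080 later in this report.

    kubectl --context auto-569293 apply -f - <<'EOF'
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: netcat
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: netcat
      template:
        metadata:
          labels:
            app: netcat
        spec:
          containers:
          - name: dnsutils
            image: registry.k8s.io/e2e-test-images/agnhost:2.45   # placeholder image
            args: ["netexec", "--http-port=8080"]                 # placeholder 8080 listener
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: netcat
    spec:
      selector:
        app: netcat
      ports:
      - port: 8080
        targetPort: 8080
    EOF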

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.25s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-569293 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.25s)
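The DNS probe resolves the short name kubernetes.default from inside the pod, which exercises both the cluster DNS service and the pod's resolv.conf search path. Querying the fully qualified name (assuming minikube's default cluster.local domain) removes the search-path dependency if the short form ever fails:

    kubectl --context auto-569293 exec deployment/netcat -- \
      nslookup kubernetes.default.svc.cluster.local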

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-569293 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.20s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.22s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-569293 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.22s)
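Localhost and HairPin run the same zero-I/O netcat probe against two targets: localhost checks that the pod can reach its own port directly, while the Service name netcat checks hairpin traffic, that is, a pod reaching itself back through its own Service VIP. The flags, annotated:

    # -z: scan only; connect and close without sending data
    # -w 5: give up after 5 seconds
    # -i 5: wait 5 seconds between connection attempts
    kubectl --context auto-569293 exec deployment/netcat -- \
      /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"    # hairpin: via the Service name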

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-lnl9t" [64d7fadc-3c83-42b4-82b8-27d025fe153d] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004311103s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.36s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-569293 "pgrep -a kubelet"
I0407 13:10:25.635945  907461 config.go:182] Loaded profile config "kindnet-569293": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.36s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (11.36s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-569293 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-t2ksk" [c080ab27-08c5-46b1-ab4a-d4653ed3dd7f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-t2ksk" [c080ab27-08c5-46b1-ab4a-d4653ed3dd7f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.003776751s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.36s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (86.61s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-569293 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-569293 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker: (1m26.614251493s)
--- PASS: TestNetworkPlugins/group/calico/Start (86.61s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-569293 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.23s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-569293 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-569293 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.18s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (65.25s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-569293 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker
E0407 13:11:08.714321  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/old-k8s-version-907855/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:11:16.890387  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/no-preload-302149/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:11:44.590859  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/no-preload-302149/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-569293 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker: (1m5.25100728s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (65.25s)
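Unlike the built-in names used elsewhere in this report (kindnet, calico, flannel, bridge, false), --cni here is given a path, so minikube applies the supplied manifest as the cluster's CNI instead of a bundled one. The invocation pattern, reduced to the relevant flag:

    # --cni accepts either a known plugin name or a path to a CNI manifest.
    out/minikube-linux-arm64 start -p custom-flannel-569293 \
      --cni=testdata/kube-flannel.yaml --driver=docker --container-runtime=docker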

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-rpfp5" [4758d38e-4055-4d15-89d1-30a7b18639e1] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.003491557s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)
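ControllerPod gates the connectivity tests on the CNI's own pod being healthy: it polls kube-system for a pod carrying the plugin's label until it is Running. A roughly equivalent manual check using kubectl's built-in readiness wait:

    kubectl --context calico-569293 -n kube-system wait pod \
      -l k8s-app=calico-node --for=condition=Ready --timeout=600s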

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-569293 "pgrep -a kubelet"
I0407 13:12:06.597215  907461 config.go:182] Loaded profile config "calico-569293": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.32s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (12.28s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-569293 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-vhbk4" [9a65c121-ebb9-4cf6-8ece-60175ae2bb73] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-vhbk4" [9a65c121-ebb9-4cf6-8ece-60175ae2bb73] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.003489664s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.28s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.39s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-569293 "pgrep -a kubelet"
I0407 13:12:08.441382  907461 config.go:182] Loaded profile config "custom-flannel-569293": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.39s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.4s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-569293 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-4jlv9" [0b1cf422-b95f-4a2b-a95d-12ddd5036e1d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-4jlv9" [0b1cf422-b95f-4a2b-a95d-12ddd5036e1d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.004293207s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.40s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-569293 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.22s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-569293 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-569293 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.17s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-569293 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.23s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-569293 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-569293 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

                                                
                                    
TestNetworkPlugins/group/false/Start (80.48s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p false-569293 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p false-569293 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker: (1m20.476693282s)
--- PASS: TestNetworkPlugins/group/false/Start (80.48s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (89.28s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-569293 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker
E0407 13:13:25.170044  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/default-k8s-diff-port-757345/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:13:25.176376  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/default-k8s-diff-port-757345/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:13:25.187745  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/default-k8s-diff-port-757345/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:13:25.209064  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/default-k8s-diff-port-757345/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:13:25.250397  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/default-k8s-diff-port-757345/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:13:25.331696  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/default-k8s-diff-port-757345/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:13:25.493014  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/default-k8s-diff-port-757345/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:13:25.814995  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/default-k8s-diff-port-757345/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:13:26.456505  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/default-k8s-diff-port-757345/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:13:27.738267  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/default-k8s-diff-port-757345/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:13:30.300475  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/default-k8s-diff-port-757345/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:13:35.421802  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/default-k8s-diff-port-757345/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:13:45.663108  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/default-k8s-diff-port-757345/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:13:59.506351  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/skaffold-259543/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:14:06.145028  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/default-k8s-diff-port-757345/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-569293 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker: (1m29.27556495s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (89.28s)

                                                
                                    
TestNetworkPlugins/group/false/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p false-569293 "pgrep -a kubelet"
I0407 13:14:08.289921  907461 config.go:182] Loaded profile config "false-569293": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/false/NetCatPod (11.3s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-569293 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-cncgf" [0808c539-49d5-4173-b4ca-3d80930980dc] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-cncgf" [0808c539-49d5-4173-b4ca-3d80930980dc] Running
E0407 13:14:16.922003  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/functional-020915/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 11.003529804s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (11.30s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-569293 "pgrep -a kubelet"
I0407 13:14:17.308261  907461 config.go:182] Loaded profile config "enable-default-cni-569293": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.29s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-569293 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-2dpph" [1f785b24-1700-43be-a511-38e242994ebd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-2dpph" [1f785b24-1700-43be-a511-38e242994ebd] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.006532987s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.29s)

                                                
                                    
TestNetworkPlugins/group/false/DNS (0.26s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-569293 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.26s)

                                                
                                    
TestNetworkPlugins/group/false/Localhost (0.24s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-569293 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.24s)

                                                
                                    
TestNetworkPlugins/group/false/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-569293 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.21s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.28s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-569293 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.28s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.24s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-569293 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.24s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.27s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-569293 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.27s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (64.89s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-569293 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker
E0407 13:14:47.107253  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/default-k8s-diff-port-757345/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-569293 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker: (1m4.887081658s)
--- PASS: TestNetworkPlugins/group/flannel/Start (64.89s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (49.59s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-569293 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker
E0407 13:14:58.881056  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/auto-569293/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:14:58.888826  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/auto-569293/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:14:58.900206  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/auto-569293/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:14:58.921828  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/auto-569293/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:14:58.964095  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/auto-569293/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:14:59.045551  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/auto-569293/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:14:59.207521  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/auto-569293/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:14:59.529549  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/auto-569293/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:15:00.171230  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/auto-569293/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:15:01.452816  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/auto-569293/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:15:04.014738  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/auto-569293/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:15:09.136800  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/auto-569293/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:15:19.275864  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/kindnet-569293/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:15:19.282182  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/kindnet-569293/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:15:19.293529  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/kindnet-569293/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:15:19.314848  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/kindnet-569293/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:15:19.356213  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/kindnet-569293/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:15:19.378568  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/auto-569293/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:15:19.437960  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/kindnet-569293/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:15:19.599551  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/kindnet-569293/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:15:19.920922  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/kindnet-569293/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:15:20.562228  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/kindnet-569293/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:15:21.843507  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/kindnet-569293/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:15:24.405096  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/kindnet-569293/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:15:29.527299  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/kindnet-569293/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:15:39.768931  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/kindnet-569293/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:15:39.860416  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/auto-569293/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:15:41.011847  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/old-k8s-version-907855/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-569293 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker: (49.594727663s)
--- PASS: TestNetworkPlugins/group/bridge/Start (49.59s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.47s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-569293 "pgrep -a kubelet"
I0407 13:15:43.877607  907461 config.go:182] Loaded profile config "bridge-569293": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.47s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (11.45s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-569293 replace --force -f testdata/netcat-deployment.yaml
I0407 13:15:44.321662  907461 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-z6j8f" [a7fe66bd-4b34-4843-9e9f-186abc8c0010] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-z6j8f" [a7fe66bd-4b34-4843-9e9f-186abc8c0010] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.003276313s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.45s)
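The kapi.go line above shows the helper waiting for the Deployment to stabilize after kubectl replace --force, i.e. for status.observedGeneration and status.replicas to catch up with the new spec before pod polling begins. kubectl can perform a comparable wait directly:

    kubectl --context bridge-569293 rollout status deployment/netcat --timeout=15m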

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-xq66x" [bfdda9f0-e39b-4a74-a87c-f963b1861c57] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003265958s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.00s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-569293 "pgrep -a kubelet"
I0407 13:15:53.726503  907461 config.go:182] Loaded profile config "flannel-569293": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (9.3s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-569293 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-g7g6g" [03921e90-7945-413e-9ec6-fac1c0d442b0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-g7g6g" [03921e90-7945-413e-9ec6-fac1c0d442b0] Running
E0407 13:16:00.253446  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/kindnet-569293/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.007354846s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.30s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.31s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-569293 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.31s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.25s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-569293 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.25s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.24s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-569293 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.24s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.25s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-569293 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.25s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.25s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-569293 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.25s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.24s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-569293 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.24s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Start (73.2s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kubenet-569293 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker
E0407 13:16:20.822201  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/auto-569293/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kubenet-569293 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker: (1m13.200541718s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (73.20s)

                                                
                                    
TestNetworkPlugins/group/kubenet/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kubenet-569293 "pgrep -a kubelet"
I0407 13:17:32.639394  907461 config.go:182] Loaded profile config "kubenet-569293": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/kubenet/NetCatPod (9.26s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-569293 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-4mkjt" [fcc7abc9-e0d0-44dd-ad94-4793b7c5d0ce] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-4mkjt" [fcc7abc9-e0d0-44dd-ad94-4793b7c5d0ce] Running
E0407 13:17:41.253165  907461 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-902080/.minikube/profiles/calico-569293/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 9.003292598s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (9.26s)

                                                
                                    
TestNetworkPlugins/group/kubenet/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-569293 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-569293 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/kubenet/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-569293 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.18s)

                                                
                                    

Test skip (26/346)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.32.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.32.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.32.2/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.32.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.32.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.32.2/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.32.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.32.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.32.2/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0.67s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-953697 --alsologtostderr --driver=docker  --container-runtime=docker
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-953697" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-953697
--- SKIP: TestDownloadOnlyKic (0.67s)

                                                
                                    
TestAddons/serial/GCPAuth/RealCredentials (0.01s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:698: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.01s)

                                                
                                    
TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (0s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:972: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1804: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:567: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

x
+
TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:84: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

x
+
TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

x
+
TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

x
+
TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

x
+
TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

x
+
TestStartStop/group/disable-driver-mounts (0.22s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-923817" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-923817
--- SKIP: TestStartStop/group/disable-driver-mounts (0.22s)

x
+
TestNetworkPlugins/group/cilium (5.94s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:631: 
----------------------- debugLogs start: cilium-569293 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-569293

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-569293

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-569293

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-569293

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-569293

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-569293

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-569293

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-569293

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-569293

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-569293

>>> host: /etc/nsswitch.conf:
* Profile "cilium-569293" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-569293"

>>> host: /etc/hosts:
* Profile "cilium-569293" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-569293"

>>> host: /etc/resolv.conf:
* Profile "cilium-569293" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-569293"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-569293

>>> host: crictl pods:
* Profile "cilium-569293" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-569293"

>>> host: crictl containers:
* Profile "cilium-569293" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-569293"

>>> k8s: describe netcat deployment:
error: context "cilium-569293" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-569293" does not exist

>>> k8s: netcat logs:
error: context "cilium-569293" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-569293" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-569293" does not exist

>>> k8s: coredns logs:
error: context "cilium-569293" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-569293" does not exist

>>> k8s: api server logs:
error: context "cilium-569293" does not exist

>>> host: /etc/cni:
* Profile "cilium-569293" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-569293"

>>> host: ip a s:
* Profile "cilium-569293" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-569293"

>>> host: ip r s:
* Profile "cilium-569293" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-569293"

>>> host: iptables-save:
* Profile "cilium-569293" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-569293"

>>> host: iptables table nat:
* Profile "cilium-569293" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-569293"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-569293

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-569293

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-569293" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-569293" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-569293

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-569293

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-569293" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-569293" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-569293" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-569293" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-569293" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-569293" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-569293"

>>> host: kubelet daemon config:
* Profile "cilium-569293" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-569293"

>>> k8s: kubelet logs:
* Profile "cilium-569293" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-569293"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-569293" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-569293"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-569293" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-569293"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-569293

>>> host: docker daemon status:
* Profile "cilium-569293" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-569293"

>>> host: docker daemon config:
* Profile "cilium-569293" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-569293"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-569293" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-569293"

>>> host: docker system info:
* Profile "cilium-569293" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-569293"

>>> host: cri-docker daemon status:
* Profile "cilium-569293" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-569293"

>>> host: cri-docker daemon config:
* Profile "cilium-569293" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-569293"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-569293" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-569293"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-569293" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-569293"

>>> host: cri-dockerd version:
* Profile "cilium-569293" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-569293"

>>> host: containerd daemon status:
* Profile "cilium-569293" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-569293"

>>> host: containerd daemon config:
* Profile "cilium-569293" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-569293"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-569293" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-569293"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-569293" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-569293"

>>> host: containerd config dump:
* Profile "cilium-569293" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-569293"

>>> host: crio daemon status:
* Profile "cilium-569293" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-569293"

>>> host: crio daemon config:
* Profile "cilium-569293" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-569293"

>>> host: /etc/crio:
* Profile "cilium-569293" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-569293"

>>> host: crio config:
* Profile "cilium-569293" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-569293"

----------------------- debugLogs end: cilium-569293 [took: 5.730230907s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-569293" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-569293
--- SKIP: TestNetworkPlugins/group/cilium (5.94s)