Test Report: Docker_Linux_docker_arm64 20598

63c1754226199ce281e4ac8e931674d5ef457043:2025-04-07:39038

Failed tests (1/346)

Order   Failed test                                               Duration (s)
313     TestStartStop/group/old-k8s-version/serial/SecondStart   377.84
TestStartStop/group/old-k8s-version/serial/SecondStart (377.84s)
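To retry this failure locally, one option (a sketch, not part of the original report) is to re-run the serial group from a minikube source checkout with out/minikube-linux-arm64 already built; the go test invocation and timeout below are standard Go tooling assumptions, while the minikube start flags are copied verbatim from the log that follows. Selecting the whole old-k8s-version group rather than SecondStart alone matters because the serial subtests build on one another: SecondStart restarts the cluster the earlier steps created.

	# Sketch of a local repro (assumes a minikube checkout with the arm64 binary built).
	# Re-run the whole serial group; SecondStart cannot run in isolation:
	go test ./test/integration -run 'TestStartStop/group/old-k8s-version' -timeout 90m

	# Or replay the exact start command the test ran (flags copied from the log below):
	out/minikube-linux-arm64 start -p old-k8s-version-169187 --memory=2200 --alsologtostderr \
	  --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts \
	  --keep-context=false --driver=docker --container-runtime=docker --kubernetes-version=v1.20.0

	# Clean up the generated profile afterwards:
	out/minikube-linux-arm64 delete -p old-k8s-version-169187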

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-169187 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p old-k8s-version-169187 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0: exit status 102 (6m14.061117762s)

-- stdout --
	* [old-k8s-version-169187] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20598
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20598-1489638/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20598-1489638/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.32.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.2
	* Using the docker driver based on existing profile
	* Starting "old-k8s-version-169187" primary control-plane node in "old-k8s-version-169187" cluster
	* Pulling base image v0.0.46-1743675393-20591 ...
	* Restarting existing docker container for "old-k8s-version-169187" ...
	* Preparing Kubernetes v1.20.0 on Docker 28.0.4 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image registry.k8s.io/echoserver:1.4
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-169187 addons enable metrics-server
	
	* Enabled addons: storage-provisioner, metrics-server, dashboard, default-storageclass
	
	

-- /stdout --
** stderr ** 
	I0407 13:45:37.341891 1819972 out.go:345] Setting OutFile to fd 1 ...
	I0407 13:45:37.342134 1819972 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 13:45:37.342202 1819972 out.go:358] Setting ErrFile to fd 2...
	I0407 13:45:37.342222 1819972 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 13:45:37.342525 1819972 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20598-1489638/.minikube/bin
	I0407 13:45:37.342955 1819972 out.go:352] Setting JSON to false
	I0407 13:45:37.344071 1819972 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":26886,"bootTime":1744006652,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1081-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0407 13:45:37.344158 1819972 start.go:139] virtualization:  
	I0407 13:45:37.347529 1819972 out.go:177] * [old-k8s-version-169187] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	I0407 13:45:37.351311 1819972 out.go:177]   - MINIKUBE_LOCATION=20598
	I0407 13:45:37.351366 1819972 notify.go:220] Checking for updates...
	I0407 13:45:37.357340 1819972 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0407 13:45:37.360277 1819972 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20598-1489638/kubeconfig
	I0407 13:45:37.363074 1819972 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20598-1489638/.minikube
	I0407 13:45:37.365875 1819972 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0407 13:45:37.368731 1819972 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0407 13:45:37.372121 1819972 config.go:182] Loaded profile config "old-k8s-version-169187": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0407 13:45:37.375464 1819972 out.go:177] * Kubernetes 1.32.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.2
	I0407 13:45:37.378238 1819972 driver.go:394] Setting default libvirt URI to qemu:///system
	I0407 13:45:37.435726 1819972 docker.go:123] docker version: linux-28.0.4:Docker Engine - Community
	I0407 13:45:37.435855 1819972 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0407 13:45:37.543723 1819972 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:53 SystemTime:2025-04-07 13:45:37.528636748 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1081-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0]] Warnings:<nil>}}
	I0407 13:45:37.543840 1819972 docker.go:318] overlay module found
	I0407 13:45:37.546924 1819972 out.go:177] * Using the docker driver based on existing profile
	I0407 13:45:37.549702 1819972 start.go:297] selected driver: docker
	I0407 13:45:37.549731 1819972 start.go:901] validating driver "docker" against &{Name:old-k8s-version-169187 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-169187 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 13:45:37.549838 1819972 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0407 13:45:37.550521 1819972 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0407 13:45:37.660991 1819972 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:53 SystemTime:2025-04-07 13:45:37.650479302 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1081-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0]] Warnings:<nil>}}
	I0407 13:45:37.661332 1819972 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0407 13:45:37.661370 1819972 cni.go:84] Creating CNI manager for ""
	I0407 13:45:37.661430 1819972 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0407 13:45:37.661472 1819972 start.go:340] cluster config:
	{Name:old-k8s-version-169187 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-169187 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 13:45:37.664743 1819972 out.go:177] * Starting "old-k8s-version-169187" primary control-plane node in "old-k8s-version-169187" cluster
	I0407 13:45:37.667576 1819972 cache.go:121] Beginning downloading kic base image for docker with docker
	I0407 13:45:37.670620 1819972 out.go:177] * Pulling base image v0.0.46-1743675393-20591 ...
	I0407 13:45:37.673342 1819972 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0407 13:45:37.673406 1819972 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20598-1489638/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0407 13:45:37.673422 1819972 cache.go:56] Caching tarball of preloaded images
	I0407 13:45:37.673511 1819972 preload.go:172] Found /home/jenkins/minikube-integration/20598-1489638/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0407 13:45:37.673527 1819972 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0407 13:45:37.673645 1819972 profile.go:143] Saving config to /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/old-k8s-version-169187/config.json ...
	I0407 13:45:37.673868 1819972 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 in local docker daemon
	I0407 13:45:37.706656 1819972 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 in local docker daemon, skipping pull
	I0407 13:45:37.706683 1819972 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 exists in daemon, skipping load
	I0407 13:45:37.706697 1819972 cache.go:230] Successfully downloaded all kic artifacts
	I0407 13:45:37.706720 1819972 start.go:360] acquireMachinesLock for old-k8s-version-169187: {Name:mkeb44ab1d4b31711db3c3abb0770c2a53c1d6ee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0407 13:45:37.706779 1819972 start.go:364] duration metric: took 36.71µs to acquireMachinesLock for "old-k8s-version-169187"
	I0407 13:45:37.706808 1819972 start.go:96] Skipping create...Using existing machine configuration
	I0407 13:45:37.706814 1819972 fix.go:54] fixHost starting: 
	I0407 13:45:37.707101 1819972 cli_runner.go:164] Run: docker container inspect old-k8s-version-169187 --format={{.State.Status}}
	I0407 13:45:37.745699 1819972 fix.go:112] recreateIfNeeded on old-k8s-version-169187: state=Stopped err=<nil>
	W0407 13:45:37.745740 1819972 fix.go:138] unexpected machine state, will restart: <nil>
	I0407 13:45:37.748821 1819972 out.go:177] * Restarting existing docker container for "old-k8s-version-169187" ...
	I0407 13:45:37.751619 1819972 cli_runner.go:164] Run: docker start old-k8s-version-169187
	I0407 13:45:38.192838 1819972 cli_runner.go:164] Run: docker container inspect old-k8s-version-169187 --format={{.State.Status}}
	I0407 13:45:38.223808 1819972 kic.go:430] container "old-k8s-version-169187" state is running.
	I0407 13:45:38.224232 1819972 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-169187
	I0407 13:45:38.256849 1819972 profile.go:143] Saving config to /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/old-k8s-version-169187/config.json ...
	I0407 13:45:38.257096 1819972 machine.go:93] provisionDockerMachine start ...
	I0407 13:45:38.257166 1819972 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-169187
	I0407 13:45:38.278574 1819972 main.go:141] libmachine: Using SSH client type: native
	I0407 13:45:38.278917 1819972 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e66c0] 0x3e8e80 <nil>  [] 0s} 127.0.0.1 34611 <nil> <nil>}
	I0407 13:45:38.278927 1819972 main.go:141] libmachine: About to run SSH command:
	hostname
	I0407 13:45:38.279615 1819972 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0407 13:45:41.403824 1819972 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-169187
	
	I0407 13:45:41.403855 1819972 ubuntu.go:169] provisioning hostname "old-k8s-version-169187"
	I0407 13:45:41.403919 1819972 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-169187
	I0407 13:45:41.421097 1819972 main.go:141] libmachine: Using SSH client type: native
	I0407 13:45:41.421407 1819972 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e66c0] 0x3e8e80 <nil>  [] 0s} 127.0.0.1 34611 <nil> <nil>}
	I0407 13:45:41.421422 1819972 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-169187 && echo "old-k8s-version-169187" | sudo tee /etc/hostname
	I0407 13:45:41.564008 1819972 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-169187
	
	I0407 13:45:41.564189 1819972 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-169187
	I0407 13:45:41.584254 1819972 main.go:141] libmachine: Using SSH client type: native
	I0407 13:45:41.584568 1819972 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e66c0] 0x3e8e80 <nil>  [] 0s} 127.0.0.1 34611 <nil> <nil>}
	I0407 13:45:41.584586 1819972 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-169187' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-169187/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-169187' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0407 13:45:41.727962 1819972 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0407 13:45:41.728037 1819972 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20598-1489638/.minikube CaCertPath:/home/jenkins/minikube-integration/20598-1489638/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20598-1489638/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20598-1489638/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20598-1489638/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20598-1489638/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20598-1489638/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20598-1489638/.minikube}
	I0407 13:45:41.728078 1819972 ubuntu.go:177] setting up certificates
	I0407 13:45:41.728114 1819972 provision.go:84] configureAuth start
	I0407 13:45:41.728205 1819972 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-169187
	I0407 13:45:41.756817 1819972 provision.go:143] copyHostCerts
	I0407 13:45:41.756882 1819972 exec_runner.go:144] found /home/jenkins/minikube-integration/20598-1489638/.minikube/ca.pem, removing ...
	I0407 13:45:41.756898 1819972 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20598-1489638/.minikube/ca.pem
	I0407 13:45:41.756979 1819972 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20598-1489638/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20598-1489638/.minikube/ca.pem (1082 bytes)
	I0407 13:45:41.757069 1819972 exec_runner.go:144] found /home/jenkins/minikube-integration/20598-1489638/.minikube/cert.pem, removing ...
	I0407 13:45:41.757074 1819972 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20598-1489638/.minikube/cert.pem
	I0407 13:45:41.757099 1819972 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20598-1489638/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20598-1489638/.minikube/cert.pem (1123 bytes)
	I0407 13:45:41.757146 1819972 exec_runner.go:144] found /home/jenkins/minikube-integration/20598-1489638/.minikube/key.pem, removing ...
	I0407 13:45:41.757150 1819972 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20598-1489638/.minikube/key.pem
	I0407 13:45:41.757172 1819972 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20598-1489638/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20598-1489638/.minikube/key.pem (1675 bytes)
	I0407 13:45:41.757214 1819972 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20598-1489638/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20598-1489638/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20598-1489638/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-169187 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-169187]
	I0407 13:45:42.201410 1819972 provision.go:177] copyRemoteCerts
	I0407 13:45:42.201683 1819972 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0407 13:45:42.201764 1819972 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-169187
	I0407 13:45:42.229784 1819972 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34611 SSHKeyPath:/home/jenkins/minikube-integration/20598-1489638/.minikube/machines/old-k8s-version-169187/id_rsa Username:docker}
	I0407 13:45:42.334997 1819972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-1489638/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0407 13:45:42.367752 1819972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-1489638/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0407 13:45:42.395789 1819972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-1489638/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0407 13:45:42.423690 1819972 provision.go:87] duration metric: took 695.547232ms to configureAuth
	I0407 13:45:42.423759 1819972 ubuntu.go:193] setting minikube options for container-runtime
	I0407 13:45:42.423994 1819972 config.go:182] Loaded profile config "old-k8s-version-169187": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0407 13:45:42.424088 1819972 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-169187
	I0407 13:45:42.445342 1819972 main.go:141] libmachine: Using SSH client type: native
	I0407 13:45:42.445662 1819972 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e66c0] 0x3e8e80 <nil>  [] 0s} 127.0.0.1 34611 <nil> <nil>}
	I0407 13:45:42.445674 1819972 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0407 13:45:42.572291 1819972 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0407 13:45:42.572316 1819972 ubuntu.go:71] root file system type: overlay
	I0407 13:45:42.572423 1819972 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0407 13:45:42.572496 1819972 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-169187
	I0407 13:45:42.595973 1819972 main.go:141] libmachine: Using SSH client type: native
	I0407 13:45:42.596283 1819972 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e66c0] 0x3e8e80 <nil>  [] 0s} 127.0.0.1 34611 <nil> <nil>}
	I0407 13:45:42.596372 1819972 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0407 13:45:42.743154 1819972 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0407 13:45:42.743260 1819972 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-169187
	I0407 13:45:42.769069 1819972 main.go:141] libmachine: Using SSH client type: native
	I0407 13:45:42.769383 1819972 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e66c0] 0x3e8e80 <nil>  [] 0s} 127.0.0.1 34611 <nil> <nil>}
	I0407 13:45:42.769400 1819972 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0407 13:45:42.914314 1819972 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0407 13:45:42.914356 1819972 machine.go:96] duration metric: took 4.657241722s to provisionDockerMachine
	I0407 13:45:42.914369 1819972 start.go:293] postStartSetup for "old-k8s-version-169187" (driver="docker")
	I0407 13:45:42.914380 1819972 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0407 13:45:42.914458 1819972 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0407 13:45:42.914518 1819972 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-169187
	I0407 13:45:42.945894 1819972 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34611 SSHKeyPath:/home/jenkins/minikube-integration/20598-1489638/.minikube/machines/old-k8s-version-169187/id_rsa Username:docker}
	I0407 13:45:43.045241 1819972 ssh_runner.go:195] Run: cat /etc/os-release
	I0407 13:45:43.049083 1819972 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0407 13:45:43.049189 1819972 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0407 13:45:43.049257 1819972 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0407 13:45:43.049284 1819972 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0407 13:45:43.049307 1819972 filesync.go:126] Scanning /home/jenkins/minikube-integration/20598-1489638/.minikube/addons for local assets ...
	I0407 13:45:43.049393 1819972 filesync.go:126] Scanning /home/jenkins/minikube-integration/20598-1489638/.minikube/files for local assets ...
	I0407 13:45:43.049529 1819972 filesync.go:149] local asset: /home/jenkins/minikube-integration/20598-1489638/.minikube/files/etc/ssl/certs/14950262.pem -> 14950262.pem in /etc/ssl/certs
	I0407 13:45:43.049774 1819972 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0407 13:45:43.060722 1819972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-1489638/.minikube/files/etc/ssl/certs/14950262.pem --> /etc/ssl/certs/14950262.pem (1708 bytes)
	I0407 13:45:43.090976 1819972 start.go:296] duration metric: took 176.590145ms for postStartSetup
	I0407 13:45:43.091062 1819972 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0407 13:45:43.091108 1819972 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-169187
	I0407 13:45:43.110246 1819972 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34611 SSHKeyPath:/home/jenkins/minikube-integration/20598-1489638/.minikube/machines/old-k8s-version-169187/id_rsa Username:docker}
	I0407 13:45:43.199348 1819972 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0407 13:45:43.204977 1819972 fix.go:56] duration metric: took 5.49815662s for fixHost
	I0407 13:45:43.205004 1819972 start.go:83] releasing machines lock for "old-k8s-version-169187", held for 5.498207172s
	I0407 13:45:43.205075 1819972 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-169187
	I0407 13:45:43.223176 1819972 ssh_runner.go:195] Run: cat /version.json
	I0407 13:45:43.223235 1819972 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-169187
	I0407 13:45:43.223478 1819972 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0407 13:45:43.223554 1819972 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-169187
	I0407 13:45:43.275021 1819972 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34611 SSHKeyPath:/home/jenkins/minikube-integration/20598-1489638/.minikube/machines/old-k8s-version-169187/id_rsa Username:docker}
	I0407 13:45:43.276078 1819972 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34611 SSHKeyPath:/home/jenkins/minikube-integration/20598-1489638/.minikube/machines/old-k8s-version-169187/id_rsa Username:docker}
	I0407 13:45:43.516700 1819972 ssh_runner.go:195] Run: systemctl --version
	I0407 13:45:43.522855 1819972 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0407 13:45:43.529739 1819972 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0407 13:45:43.562961 1819972 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0407 13:45:43.563102 1819972 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0407 13:45:43.591807 1819972 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0407 13:45:43.623565 1819972 cni.go:308] configured [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0407 13:45:43.623692 1819972 start.go:495] detecting cgroup driver to use...
	I0407 13:45:43.623763 1819972 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0407 13:45:43.623961 1819972 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0407 13:45:43.651127 1819972 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I0407 13:45:43.663772 1819972 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0407 13:45:43.682240 1819972 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0407 13:45:43.682397 1819972 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0407 13:45:43.694433 1819972 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0407 13:45:43.716820 1819972 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0407 13:45:43.729350 1819972 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0407 13:45:43.745991 1819972 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0407 13:45:43.761204 1819972 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0407 13:45:43.777047 1819972 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0407 13:45:43.788562 1819972 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0407 13:45:43.802464 1819972 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 13:45:43.927726 1819972 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0407 13:45:44.071278 1819972 start.go:495] detecting cgroup driver to use...
	I0407 13:45:44.071414 1819972 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0407 13:45:44.071567 1819972 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0407 13:45:44.100118 1819972 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0407 13:45:44.100239 1819972 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0407 13:45:44.117455 1819972 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0407 13:45:44.144889 1819972 ssh_runner.go:195] Run: which cri-dockerd
	I0407 13:45:44.150734 1819972 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0407 13:45:44.162526 1819972 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0407 13:45:44.185684 1819972 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0407 13:45:44.348770 1819972 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0407 13:45:44.500284 1819972 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0407 13:45:44.500377 1819972 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0407 13:45:44.530688 1819972 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 13:45:44.684157 1819972 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0407 13:45:45.490323 1819972 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0407 13:45:45.514741 1819972 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0407 13:45:45.556276 1819972 out.go:235] * Preparing Kubernetes v1.20.0 on Docker 28.0.4 ...
	I0407 13:45:45.556432 1819972 cli_runner.go:164] Run: docker network inspect old-k8s-version-169187 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0407 13:45:45.581423 1819972 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0407 13:45:45.585657 1819972 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0407 13:45:45.596771 1819972 kubeadm.go:883] updating cluster {Name:old-k8s-version-169187 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-169187 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0407 13:45:45.596881 1819972 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0407 13:45:45.596948 1819972 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0407 13:45:45.621465 1819972 docker.go:689] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-proxy:v1.20.0
	registry.k8s.io/kube-proxy:v1.20.0
	k8s.gcr.io/kube-controller-manager:v1.20.0
	registry.k8s.io/kube-controller-manager:v1.20.0
	k8s.gcr.io/kube-apiserver:v1.20.0
	registry.k8s.io/kube-apiserver:v1.20.0
	k8s.gcr.io/kube-scheduler:v1.20.0
	registry.k8s.io/kube-scheduler:v1.20.0
	k8s.gcr.io/etcd:3.4.13-0
	registry.k8s.io/etcd:3.4.13-0
	k8s.gcr.io/coredns:1.7.0
	registry.k8s.io/coredns:1.7.0
	k8s.gcr.io/pause:3.2
	registry.k8s.io/pause:3.2
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0407 13:45:45.621485 1819972 docker.go:619] Images already preloaded, skipping extraction
	I0407 13:45:45.621544 1819972 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0407 13:45:45.648322 1819972 docker.go:689] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/kube-proxy:v1.20.0
	k8s.gcr.io/kube-proxy:v1.20.0
	registry.k8s.io/kube-apiserver:v1.20.0
	k8s.gcr.io/kube-apiserver:v1.20.0
	k8s.gcr.io/kube-controller-manager:v1.20.0
	registry.k8s.io/kube-controller-manager:v1.20.0
	k8s.gcr.io/kube-scheduler:v1.20.0
	registry.k8s.io/kube-scheduler:v1.20.0
	k8s.gcr.io/etcd:3.4.13-0
	registry.k8s.io/etcd:3.4.13-0
	k8s.gcr.io/coredns:1.7.0
	registry.k8s.io/coredns:1.7.0
	k8s.gcr.io/pause:3.2
	registry.k8s.io/pause:3.2
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0407 13:45:45.648394 1819972 cache_images.go:84] Images are preloaded, skipping loading
	I0407 13:45:45.648418 1819972 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.20.0 docker true true} ...
	I0407 13:45:45.648540 1819972 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-169187 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-169187 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0407 13:45:45.648630 1819972 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0407 13:45:45.703092 1819972 cni.go:84] Creating CNI manager for ""
	I0407 13:45:45.703116 1819972 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0407 13:45:45.703125 1819972 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0407 13:45:45.703143 1819972 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-169187 NodeName:old-k8s-version-169187 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0407 13:45:45.703274 1819972 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-169187"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0407 13:45:45.703337 1819972 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0407 13:45:45.712098 1819972 binaries.go:44] Found k8s binaries, skipping transfer
	I0407 13:45:45.712209 1819972 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0407 13:45:45.723419 1819972 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (348 bytes)
	I0407 13:45:45.742689 1819972 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0407 13:45:45.760834 1819972 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2118 bytes)
	I0407 13:45:45.778556 1819972 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0407 13:45:45.781915 1819972 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0407 13:45:45.793442 1819972 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 13:45:45.892660 1819972 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0407 13:45:45.907271 1819972 certs.go:68] Setting up /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/old-k8s-version-169187 for IP: 192.168.76.2
	I0407 13:45:45.907288 1819972 certs.go:194] generating shared ca certs ...
	I0407 13:45:45.907304 1819972 certs.go:226] acquiring lock for ca certs: {Name:mk03ca927c02de3344f72431a7d9f1cc9d84ee54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:45:45.907437 1819972 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20598-1489638/.minikube/ca.key
	I0407 13:45:45.907475 1819972 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20598-1489638/.minikube/proxy-client-ca.key
	I0407 13:45:45.907482 1819972 certs.go:256] generating profile certs ...
	I0407 13:45:45.907578 1819972 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/old-k8s-version-169187/client.key
	I0407 13:45:45.907643 1819972 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/old-k8s-version-169187/apiserver.key.b87325ea
	I0407 13:45:45.907683 1819972 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/old-k8s-version-169187/proxy-client.key
	I0407 13:45:45.907793 1819972 certs.go:484] found cert: /home/jenkins/minikube-integration/20598-1489638/.minikube/certs/1495026.pem (1338 bytes)
	W0407 13:45:45.907819 1819972 certs.go:480] ignoring /home/jenkins/minikube-integration/20598-1489638/.minikube/certs/1495026_empty.pem, impossibly tiny 0 bytes
	I0407 13:45:45.907827 1819972 certs.go:484] found cert: /home/jenkins/minikube-integration/20598-1489638/.minikube/certs/ca-key.pem (1679 bytes)
	I0407 13:45:45.907851 1819972 certs.go:484] found cert: /home/jenkins/minikube-integration/20598-1489638/.minikube/certs/ca.pem (1082 bytes)
	I0407 13:45:45.907873 1819972 certs.go:484] found cert: /home/jenkins/minikube-integration/20598-1489638/.minikube/certs/cert.pem (1123 bytes)
	I0407 13:45:45.907893 1819972 certs.go:484] found cert: /home/jenkins/minikube-integration/20598-1489638/.minikube/certs/key.pem (1675 bytes)
	I0407 13:45:45.907932 1819972 certs.go:484] found cert: /home/jenkins/minikube-integration/20598-1489638/.minikube/files/etc/ssl/certs/14950262.pem (1708 bytes)
	I0407 13:45:45.908498 1819972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-1489638/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0407 13:45:45.940610 1819972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-1489638/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0407 13:45:45.967616 1819972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-1489638/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0407 13:45:46.010354 1819972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-1489638/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0407 13:45:46.058600 1819972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/old-k8s-version-169187/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0407 13:45:46.097263 1819972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/old-k8s-version-169187/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0407 13:45:46.146155 1819972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/old-k8s-version-169187/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0407 13:45:46.190071 1819972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/old-k8s-version-169187/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0407 13:45:46.220895 1819972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-1489638/.minikube/files/etc/ssl/certs/14950262.pem --> /usr/share/ca-certificates/14950262.pem (1708 bytes)
	I0407 13:45:46.259034 1819972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-1489638/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0407 13:45:46.285109 1819972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-1489638/.minikube/certs/1495026.pem --> /usr/share/ca-certificates/1495026.pem (1338 bytes)
	I0407 13:45:46.311601 1819972 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0407 13:45:46.330743 1819972 ssh_runner.go:195] Run: openssl version
	I0407 13:45:46.337946 1819972 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14950262.pem && ln -fs /usr/share/ca-certificates/14950262.pem /etc/ssl/certs/14950262.pem"
	I0407 13:45:46.347347 1819972 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14950262.pem
	I0407 13:45:46.350728 1819972 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  7 12:57 /usr/share/ca-certificates/14950262.pem
	I0407 13:45:46.350810 1819972 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14950262.pem
	I0407 13:45:46.357631 1819972 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14950262.pem /etc/ssl/certs/3ec20f2e.0"
	I0407 13:45:46.366607 1819972 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0407 13:45:46.376047 1819972 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0407 13:45:46.379342 1819972 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  7 12:50 /usr/share/ca-certificates/minikubeCA.pem
	I0407 13:45:46.379409 1819972 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0407 13:45:46.387122 1819972 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0407 13:45:46.396842 1819972 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1495026.pem && ln -fs /usr/share/ca-certificates/1495026.pem /etc/ssl/certs/1495026.pem"
	I0407 13:45:46.406421 1819972 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1495026.pem
	I0407 13:45:46.409782 1819972 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  7 12:57 /usr/share/ca-certificates/1495026.pem
	I0407 13:45:46.409875 1819972 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1495026.pem
	I0407 13:45:46.416761 1819972 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1495026.pem /etc/ssl/certs/51391683.0"
	I0407 13:45:46.425853 1819972 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0407 13:45:46.429255 1819972 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0407 13:45:46.436825 1819972 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0407 13:45:46.443732 1819972 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0407 13:45:46.450652 1819972 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0407 13:45:46.457612 1819972 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0407 13:45:46.464594 1819972 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0407 13:45:46.471412 1819972 kubeadm.go:392] StartCluster: {Name:old-k8s-version-169187 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-169187 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 13:45:46.471648 1819972 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0407 13:45:46.491771 1819972 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0407 13:45:46.500534 1819972 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0407 13:45:46.500562 1819972 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0407 13:45:46.500633 1819972 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0407 13:45:46.509213 1819972 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0407 13:45:46.509691 1819972 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-169187" does not appear in /home/jenkins/minikube-integration/20598-1489638/kubeconfig
	I0407 13:45:46.509875 1819972 kubeconfig.go:62] /home/jenkins/minikube-integration/20598-1489638/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-169187" cluster setting kubeconfig missing "old-k8s-version-169187" context setting]
	I0407 13:45:46.510186 1819972 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20598-1489638/kubeconfig: {Name:mk35d977c3a2e102445ffcc403aa71fe5bdeafe7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:45:46.511455 1819972 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0407 13:45:46.520633 1819972 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.76.2
	I0407 13:45:46.520665 1819972 kubeadm.go:597] duration metric: took 20.097076ms to restartPrimaryControlPlane
	I0407 13:45:46.520675 1819972 kubeadm.go:394] duration metric: took 49.270487ms to StartCluster
	I0407 13:45:46.520715 1819972 settings.go:142] acquiring lock: {Name:mk7d059a74c0e18dafa1f05777e364166f9e2e1f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:45:46.520789 1819972 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20598-1489638/kubeconfig
	I0407 13:45:46.521359 1819972 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20598-1489638/kubeconfig: {Name:mk35d977c3a2e102445ffcc403aa71fe5bdeafe7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:45:46.521552 1819972 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0407 13:45:46.521867 1819972 config.go:182] Loaded profile config "old-k8s-version-169187": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0407 13:45:46.521910 1819972 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0407 13:45:46.521980 1819972 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-169187"
	I0407 13:45:46.521993 1819972 addons.go:238] Setting addon storage-provisioner=true in "old-k8s-version-169187"
	W0407 13:45:46.522004 1819972 addons.go:247] addon storage-provisioner should already be in state true
	I0407 13:45:46.522024 1819972 host.go:66] Checking if "old-k8s-version-169187" exists ...
	I0407 13:45:46.522186 1819972 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-169187"
	I0407 13:45:46.522221 1819972 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-169187"
	I0407 13:45:46.522516 1819972 cli_runner.go:164] Run: docker container inspect old-k8s-version-169187 --format={{.State.Status}}
	I0407 13:45:46.522602 1819972 cli_runner.go:164] Run: docker container inspect old-k8s-version-169187 --format={{.State.Status}}
	I0407 13:45:46.523222 1819972 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-169187"
	I0407 13:45:46.523245 1819972 addons.go:238] Setting addon metrics-server=true in "old-k8s-version-169187"
	W0407 13:45:46.523253 1819972 addons.go:247] addon metrics-server should already be in state true
	I0407 13:45:46.523284 1819972 host.go:66] Checking if "old-k8s-version-169187" exists ...
	I0407 13:45:46.523768 1819972 cli_runner.go:164] Run: docker container inspect old-k8s-version-169187 --format={{.State.Status}}
	I0407 13:45:46.526156 1819972 addons.go:69] Setting dashboard=true in profile "old-k8s-version-169187"
	I0407 13:45:46.526246 1819972 addons.go:238] Setting addon dashboard=true in "old-k8s-version-169187"
	W0407 13:45:46.526606 1819972 addons.go:247] addon dashboard should already be in state true
	I0407 13:45:46.526679 1819972 host.go:66] Checking if "old-k8s-version-169187" exists ...
	I0407 13:45:46.526594 1819972 out.go:177] * Verifying Kubernetes components...
	I0407 13:45:46.533287 1819972 cli_runner.go:164] Run: docker container inspect old-k8s-version-169187 --format={{.State.Status}}
	I0407 13:45:46.534331 1819972 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 13:45:46.573339 1819972 addons.go:238] Setting addon default-storageclass=true in "old-k8s-version-169187"
	W0407 13:45:46.573369 1819972 addons.go:247] addon default-storageclass should already be in state true
	I0407 13:45:46.573397 1819972 host.go:66] Checking if "old-k8s-version-169187" exists ...
	I0407 13:45:46.573822 1819972 cli_runner.go:164] Run: docker container inspect old-k8s-version-169187 --format={{.State.Status}}
	I0407 13:45:46.606774 1819972 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0407 13:45:46.610200 1819972 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0407 13:45:46.610226 1819972 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0407 13:45:46.610303 1819972 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-169187
	I0407 13:45:46.617518 1819972 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0407 13:45:46.620134 1819972 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0407 13:45:46.620158 1819972 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0407 13:45:46.620225 1819972 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-169187
	I0407 13:45:46.620355 1819972 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0407 13:45:46.623180 1819972 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0407 13:45:46.625940 1819972 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0407 13:45:46.625963 1819972 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0407 13:45:46.626032 1819972 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-169187
	I0407 13:45:46.649007 1819972 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0407 13:45:46.649036 1819972 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0407 13:45:46.649099 1819972 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-169187
	I0407 13:45:46.684331 1819972 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34611 SSHKeyPath:/home/jenkins/minikube-integration/20598-1489638/.minikube/machines/old-k8s-version-169187/id_rsa Username:docker}
	I0407 13:45:46.692435 1819972 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34611 SSHKeyPath:/home/jenkins/minikube-integration/20598-1489638/.minikube/machines/old-k8s-version-169187/id_rsa Username:docker}
	I0407 13:45:46.705140 1819972 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34611 SSHKeyPath:/home/jenkins/minikube-integration/20598-1489638/.minikube/machines/old-k8s-version-169187/id_rsa Username:docker}
	I0407 13:45:46.719665 1819972 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34611 SSHKeyPath:/home/jenkins/minikube-integration/20598-1489638/.minikube/machines/old-k8s-version-169187/id_rsa Username:docker}
	I0407 13:45:46.763170 1819972 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0407 13:45:46.805887 1819972 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-169187" to be "Ready" ...
	I0407 13:45:46.878898 1819972 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0407 13:45:46.878970 1819972 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0407 13:45:46.901522 1819972 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0407 13:45:46.901547 1819972 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0407 13:45:46.909493 1819972 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0407 13:45:46.922608 1819972 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0407 13:45:46.922634 1819972 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0407 13:45:46.944673 1819972 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0407 13:45:46.950659 1819972 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0407 13:45:46.950683 1819972 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0407 13:45:46.974368 1819972 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0407 13:45:46.974393 1819972 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0407 13:45:47.001119 1819972 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0407 13:45:47.001145 1819972 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0407 13:45:47.026054 1819972 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0407 13:45:47.026090 1819972 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0407 13:45:47.070689 1819972 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0407 13:45:47.139408 1819972 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0407 13:45:47.139433 1819972 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	W0407 13:45:47.147296 1819972 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0407 13:45:47.147334 1819972 retry.go:31] will retry after 137.738646ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0407 13:45:47.176660 1819972 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0407 13:45:47.176685 1819972 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0407 13:45:47.195973 1819972 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0407 13:45:47.196004 1819972 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0407 13:45:47.214949 1819972 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0407 13:45:47.214974 1819972 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	W0407 13:45:47.251175 1819972 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0407 13:45:47.251207 1819972 retry.go:31] will retry after 322.914186ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0407 13:45:47.274102 1819972 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0407 13:45:47.274127 1819972 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0407 13:45:47.285443 1819972 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0407 13:45:47.340477 1819972 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0407 13:45:47.378347 1819972 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0407 13:45:47.378379 1819972 retry.go:31] will retry after 203.482972ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0407 13:45:47.519573 1819972 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0407 13:45:47.519612 1819972 retry.go:31] will retry after 498.955382ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0407 13:45:47.560042 1819972 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0407 13:45:47.560133 1819972 retry.go:31] will retry after 359.490181ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0407 13:45:47.575363 1819972 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0407 13:45:47.582867 1819972 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0407 13:45:47.712275 1819972 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0407 13:45:47.712374 1819972 retry.go:31] will retry after 426.606451ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0407 13:45:47.752406 1819972 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0407 13:45:47.752440 1819972 retry.go:31] will retry after 530.335448ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0407 13:45:47.920183 1819972 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0407 13:45:48.010648 1819972 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0407 13:45:48.010686 1819972 retry.go:31] will retry after 339.685278ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0407 13:45:48.018818 1819972 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0407 13:45:48.139416 1819972 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0407 13:45:48.168216 1819972 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0407 13:45:48.168284 1819972 retry.go:31] will retry after 734.85388ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0407 13:45:48.283571 1819972 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0407 13:45:48.346381 1819972 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0407 13:45:48.346436 1819972 retry.go:31] will retry after 735.365017ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0407 13:45:48.350553 1819972 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0407 13:45:48.526652 1819972 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0407 13:45:48.526681 1819972 retry.go:31] will retry after 435.394566ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0407 13:45:48.537390 1819972 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0407 13:45:48.537417 1819972 retry.go:31] will retry after 308.772517ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0407 13:45:48.807238 1819972 node_ready.go:53] error getting node "old-k8s-version-169187": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-169187": dial tcp 192.168.76.2:8443: connect: connection refused
	I0407 13:45:48.846571 1819972 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0407 13:45:48.904028 1819972 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0407 13:45:48.962901 1819972 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0407 13:45:49.036851 1819972 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0407 13:45:49.036887 1819972 retry.go:31] will retry after 606.867748ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0407 13:45:49.082154 1819972 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0407 13:45:49.183214 1819972 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0407 13:45:49.183241 1819972 retry.go:31] will retry after 1.218106895s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0407 13:45:49.216787 1819972 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0407 13:45:49.216817 1819972 retry.go:31] will retry after 558.290441ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0407 13:45:49.266845 1819972 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0407 13:45:49.266880 1819972 retry.go:31] will retry after 1.022558809s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0407 13:45:49.644688 1819972 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0407 13:45:49.775338 1819972 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0407 13:45:49.847411 1819972 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0407 13:45:49.847590 1819972 retry.go:31] will retry after 1.616020397s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0407 13:45:49.915468 1819972 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0407 13:45:49.915528 1819972 retry.go:31] will retry after 642.299972ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0407 13:45:50.289874 1819972 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0407 13:45:50.401792 1819972 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0407 13:45:50.558203 1819972 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0407 13:45:50.704273 1819972 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0407 13:45:50.704313 1819972 retry.go:31] will retry after 809.469556ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0407 13:45:50.721883 1819972 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0407 13:45:50.721918 1819972 retry.go:31] will retry after 960.830005ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0407 13:45:50.938639 1819972 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0407 13:45:50.938667 1819972 retry.go:31] will retry after 1.060610394s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0407 13:45:51.464031 1819972 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0407 13:45:51.514512 1819972 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0407 13:45:51.683802 1819972 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0407 13:45:52.000121 1819972 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0407 13:45:59.950854 1819972 node_ready.go:49] node "old-k8s-version-169187" has status "Ready":"True"
	I0407 13:45:59.950884 1819972 node_ready.go:38] duration metric: took 13.14496306s for node "old-k8s-version-169187" to be "Ready" ...
	I0407 13:45:59.950896 1819972 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0407 13:46:00.318686 1819972 pod_ready.go:79] waiting up to 6m0s for pod "coredns-74ff55c5b-zpflr" in "kube-system" namespace to be "Ready" ...
	I0407 13:46:00.437587 1819972 pod_ready.go:93] pod "coredns-74ff55c5b-zpflr" in "kube-system" namespace has status "Ready":"True"
	I0407 13:46:00.437664 1819972 pod_ready.go:82] duration metric: took 118.944951ms for pod "coredns-74ff55c5b-zpflr" in "kube-system" namespace to be "Ready" ...
	I0407 13:46:00.437691 1819972 pod_ready.go:79] waiting up to 6m0s for pod "etcd-old-k8s-version-169187" in "kube-system" namespace to be "Ready" ...
	I0407 13:46:00.474570 1819972 pod_ready.go:93] pod "etcd-old-k8s-version-169187" in "kube-system" namespace has status "Ready":"True"
	I0407 13:46:00.474645 1819972 pod_ready.go:82] duration metric: took 36.908343ms for pod "etcd-old-k8s-version-169187" in "kube-system" namespace to be "Ready" ...
	I0407 13:46:00.474680 1819972 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-169187" in "kube-system" namespace to be "Ready" ...
	I0407 13:46:01.479854 1819972 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (10.01575852s)
	I0407 13:46:01.480073 1819972 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (9.965533446s)
	I0407 13:46:01.480341 1819972 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.796513822s)
	I0407 13:46:01.480420 1819972 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (9.480274795s)
	I0407 13:46:01.480431 1819972 addons.go:479] Verifying addon metrics-server=true in "old-k8s-version-169187"
	I0407 13:46:01.483201 1819972 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-169187 addons enable metrics-server
	
	I0407 13:46:01.488587 1819972 out.go:177] * Enabled addons: storage-provisioner, metrics-server, dashboard, default-storageclass
	I0407 13:46:01.491749 1819972 addons.go:514] duration metric: took 14.969835351s for enable addons: enabled=[storage-provisioner metrics-server dashboard default-storageclass]
	I0407 13:46:02.479846 1819972 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-169187" in "kube-system" namespace has status "Ready":"False"
	I0407 13:46:04.979140 1819972 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-169187" in "kube-system" namespace has status "Ready":"False"
	I0407 13:46:06.980686 1819972 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-169187" in "kube-system" namespace has status "Ready":"False"
	I0407 13:46:09.480337 1819972 pod_ready.go:93] pod "kube-apiserver-old-k8s-version-169187" in "kube-system" namespace has status "Ready":"True"
	I0407 13:46:09.480358 1819972 pod_ready.go:82] duration metric: took 9.00565624s for pod "kube-apiserver-old-k8s-version-169187" in "kube-system" namespace to be "Ready" ...
	I0407 13:46:09.480374 1819972 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-169187" in "kube-system" namespace to be "Ready" ...
	I0407 13:46:11.486698 1819972 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-169187" in "kube-system" namespace has status "Ready":"False"
	I0407 13:46:13.986508 1819972 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-169187" in "kube-system" namespace has status "Ready":"False"
	I0407 13:46:16.485738 1819972 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-169187" in "kube-system" namespace has status "Ready":"False"
	I0407 13:46:19.007617 1819972 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-169187" in "kube-system" namespace has status "Ready":"False"
	I0407 13:46:21.485087 1819972 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-169187" in "kube-system" namespace has status "Ready":"False"
	I0407 13:46:23.487243 1819972 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-169187" in "kube-system" namespace has status "Ready":"False"
	I0407 13:46:25.986791 1819972 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-169187" in "kube-system" namespace has status "Ready":"False"
	I0407 13:46:28.486233 1819972 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-169187" in "kube-system" namespace has status "Ready":"False"
	I0407 13:46:30.985074 1819972 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-169187" in "kube-system" namespace has status "Ready":"False"
	I0407 13:46:32.987247 1819972 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-169187" in "kube-system" namespace has status "Ready":"False"
	I0407 13:46:35.486342 1819972 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-169187" in "kube-system" namespace has status "Ready":"False"
	I0407 13:46:37.986787 1819972 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-169187" in "kube-system" namespace has status "Ready":"False"
	I0407 13:46:40.486108 1819972 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-169187" in "kube-system" namespace has status "Ready":"False"
	I0407 13:46:42.486686 1819972 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-169187" in "kube-system" namespace has status "Ready":"False"
	I0407 13:46:44.985467 1819972 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-169187" in "kube-system" namespace has status "Ready":"False"
	I0407 13:46:46.986540 1819972 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-169187" in "kube-system" namespace has status "Ready":"False"
	I0407 13:46:48.992761 1819972 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-169187" in "kube-system" namespace has status "Ready":"False"
	I0407 13:46:51.485708 1819972 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-169187" in "kube-system" namespace has status "Ready":"False"
	I0407 13:46:53.488061 1819972 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-169187" in "kube-system" namespace has status "Ready":"False"
	I0407 13:46:55.986202 1819972 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-169187" in "kube-system" namespace has status "Ready":"False"
	I0407 13:46:58.486551 1819972 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-169187" in "kube-system" namespace has status "Ready":"False"
	I0407 13:47:00.488834 1819972 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-169187" in "kube-system" namespace has status "Ready":"False"
	I0407 13:47:02.989313 1819972 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-169187" in "kube-system" namespace has status "Ready":"False"
	I0407 13:47:04.989998 1819972 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-169187" in "kube-system" namespace has status "Ready":"False"
	I0407 13:47:07.486888 1819972 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-169187" in "kube-system" namespace has status "Ready":"False"
	I0407 13:47:09.992665 1819972 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-169187" in "kube-system" namespace has status "Ready":"False"
	I0407 13:47:12.486545 1819972 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-169187" in "kube-system" namespace has status "Ready":"False"
	I0407 13:47:14.486592 1819972 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-169187" in "kube-system" namespace has status "Ready":"False"
	I0407 13:47:16.986585 1819972 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-169187" in "kube-system" namespace has status "Ready":"False"
	I0407 13:47:19.489044 1819972 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-169187" in "kube-system" namespace has status "Ready":"False"
	I0407 13:47:21.985350 1819972 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-169187" in "kube-system" namespace has status "Ready":"False"
	I0407 13:47:23.985715 1819972 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-169187" in "kube-system" namespace has status "Ready":"False"
	I0407 13:47:25.986326 1819972 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-169187" in "kube-system" namespace has status "Ready":"False"
	I0407 13:47:27.987398 1819972 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-169187" in "kube-system" namespace has status "Ready":"False"
	I0407 13:47:28.985871 1819972 pod_ready.go:93] pod "kube-controller-manager-old-k8s-version-169187" in "kube-system" namespace has status "Ready":"True"
	I0407 13:47:28.985898 1819972 pod_ready.go:82] duration metric: took 1m19.50551601s for pod "kube-controller-manager-old-k8s-version-169187" in "kube-system" namespace to be "Ready" ...
	I0407 13:47:28.985912 1819972 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-d8l5m" in "kube-system" namespace to be "Ready" ...
	I0407 13:47:28.990341 1819972 pod_ready.go:93] pod "kube-proxy-d8l5m" in "kube-system" namespace has status "Ready":"True"
	I0407 13:47:28.990366 1819972 pod_ready.go:82] duration metric: took 4.448112ms for pod "kube-proxy-d8l5m" in "kube-system" namespace to be "Ready" ...
	I0407 13:47:28.990378 1819972 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-169187" in "kube-system" namespace to be "Ready" ...
	I0407 13:47:29.000261 1819972 pod_ready.go:93] pod "kube-scheduler-old-k8s-version-169187" in "kube-system" namespace has status "Ready":"True"
	I0407 13:47:29.000287 1819972 pod_ready.go:82] duration metric: took 9.901857ms for pod "kube-scheduler-old-k8s-version-169187" in "kube-system" namespace to be "Ready" ...
	I0407 13:47:29.000299 1819972 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace to be "Ready" ...
	I0407 13:47:31.015306 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
	I0407 13:47:33.505236 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
	I0407 13:47:35.505498 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
	I0407 13:47:37.505592 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
	I0407 13:47:40.031733 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
	I0407 13:47:42.505868 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
	I0407 13:47:44.506299 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
	I0407 13:47:47.007309 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
	I0407 13:47:49.505231 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
	I0407 13:47:51.505641 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
	I0407 13:47:53.506057 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
	I0407 13:47:56.008002 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
	I0407 13:47:58.506307 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
	I0407 13:48:00.543344 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
	I0407 13:48:03.007218 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
	I0407 13:48:05.009829 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
	I0407 13:48:07.506066 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
	I0407 13:48:09.511731 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
	I0407 13:48:12.010529 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
	I0407 13:48:14.505657 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
	I0407 13:48:17.006488 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
	I0407 13:48:19.505471 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
	I0407 13:48:21.505795 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
	I0407 13:48:24.009585 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
	I0407 13:48:26.505995 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
	I0407 13:48:29.005067 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
	I0407 13:48:31.013227 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
	I0407 13:48:33.505459 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
	I0407 13:48:35.506271 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
	I0407 13:48:38.009021 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
	I0407 13:48:40.016937 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
	I0407 13:48:42.505303 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
	I0407 13:48:44.505496 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
	I0407 13:48:46.505979 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
	I0407 13:48:49.011069 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
	I0407 13:48:51.505964 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
	I0407 13:48:53.506244 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
	I0407 13:48:55.506390 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
	I0407 13:48:58.009508 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
	I0407 13:49:00.028928 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
	I0407 13:49:02.505916 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
	I0407 13:49:05.008679 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
	I0407 13:49:07.505851 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
	I0407 13:49:09.505950 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
	I0407 13:49:11.506765 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
	I0407 13:49:14.008686 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
	I0407 13:49:16.010395 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
	I0407 13:49:18.506565 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
	I0407 13:49:21.008369 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
	I0407 13:49:23.509187 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
	I0407 13:49:26.006852 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
	I0407 13:49:28.506387 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
	I0407 13:49:31.017283 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
	I0407 13:49:33.505743 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
	I0407 13:49:35.506060 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
	I0407 13:49:38.011236 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
	I0407 13:49:40.506321 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
	I0407 13:49:43.007067 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
	I0407 13:49:45.011190 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
	I0407 13:49:47.507746 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
	I0407 13:49:50.009909 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
	I0407 13:49:52.012778 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
	I0407 13:49:54.506025 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
	I0407 13:49:57.007303 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
	I0407 13:49:59.505257 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
	I0407 13:50:01.577654 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
	I0407 13:50:04.006322 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
	I0407 13:50:06.010917 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
	I0407 13:50:08.506143 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
	I0407 13:50:10.510360 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
	I0407 13:50:13.007003 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
	I0407 13:50:15.021761 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
	I0407 13:50:17.506342 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
	I0407 13:50:20.018190 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
	I0407 13:50:22.512125 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
	I0407 13:50:25.015143 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
	I0407 13:50:27.506468 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
	I0407 13:50:30.018264 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
	I0407 13:50:32.506128 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
	I0407 13:50:35.009129 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
	I0407 13:50:37.014036 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
	I0407 13:50:39.505741 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
	I0407 13:50:41.506092 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
	I0407 13:50:44.006600 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
	I0407 13:50:46.007826 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
	I0407 13:50:48.008260 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
	I0407 13:50:50.015194 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
	I0407 13:50:52.041098 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
	I0407 13:50:54.505268 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
	I0407 13:50:56.505734 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
	I0407 13:50:58.505969 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
	I0407 13:51:00.506749 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
	I0407 13:51:03.019804 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
	I0407 13:51:05.506022 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
	I0407 13:51:08.009389 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
	I0407 13:51:10.014918 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
	I0407 13:51:12.506567 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
	I0407 13:51:15.021540 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
	I0407 13:51:17.505345 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
	I0407 13:51:19.506004 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
	I0407 13:51:22.009002 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
	I0407 13:51:24.009399 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
	I0407 13:51:26.510537 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
	I0407 13:51:29.006696 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
	I0407 13:51:29.006729 1819972 pod_ready.go:82] duration metric: took 4m0.006422181s for pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace to be "Ready" ...
	E0407 13:51:29.006738 1819972 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0407 13:51:29.006746 1819972 pod_ready.go:39] duration metric: took 5m29.055838962s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
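
This is the failure that decides the test: every other system pod eventually reports Ready, but metrics-server-9975d5f86-7rkcc never does, so the 4m WaitExtra budget expires and the stuck pod is what the rest of this log is diagnosing. To confirm why the pod is stuck, a sketch like this should work (the kubeconfig context name matches the profile; the pod name and the k8s-app=metrics-server label are taken from this run and are assumptions elsewhere):

    # Show the stuck metrics-server pod
    kubectl --context old-k8s-version-169187 -n kube-system \
        get pods -l k8s-app=metrics-server

    # The Events section should show the ErrImagePull / ImagePullBackOff loop
    # for fake.domain/registry.k8s.io/echoserver:1.4
    kubectl --context old-k8s-version-169187 -n kube-system \
        describe pod metrics-server-9975d5f86-7rkcc
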
	I0407 13:51:29.006765 1819972 api_server.go:52] waiting for apiserver process to appear ...
	I0407 13:51:29.006855 1819972 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0407 13:51:29.026509 1819972 logs.go:282] 2 containers: [82525be035b3 78f8992ce8b4]
	I0407 13:51:29.026600 1819972 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0407 13:51:29.048568 1819972 logs.go:282] 2 containers: [b45737d73f96 f4fcf1ba0dce]
	I0407 13:51:29.048651 1819972 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0407 13:51:29.067406 1819972 logs.go:282] 2 containers: [a2086baae207 d92117844997]
	I0407 13:51:29.067532 1819972 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0407 13:51:29.087437 1819972 logs.go:282] 2 containers: [fce53c7f2eb0 3a9781764312]
	I0407 13:51:29.087614 1819972 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0407 13:51:29.106213 1819972 logs.go:282] 2 containers: [062895b6a45a 7cb4581969c6]
	I0407 13:51:29.106301 1819972 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0407 13:51:29.124943 1819972 logs.go:282] 2 containers: [c2da54d5c256 3e48a853c03b]
	I0407 13:51:29.125036 1819972 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0407 13:51:29.143294 1819972 logs.go:282] 0 containers: []
	W0407 13:51:29.143365 1819972 logs.go:284] No container was found matching "kindnet"
	I0407 13:51:29.143439 1819972 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0407 13:51:29.164003 1819972 logs.go:282] 1 containers: [c66d59ac00e0]
	I0407 13:51:29.164083 1819972 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0407 13:51:29.186403 1819972 logs.go:282] 2 containers: [55bf8eb1ab94 fcbefe8497a0]
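
With the extra wait abandoned, minikube turns to log collection. Most components show two container IDs, presumably the exited container from before the restart alongside the restarted one; kindnet finds none since this cluster does not use it, and the dashboard has only its current container. The enumeration can be reproduced by hand with the same docker filter, e.g. (a sketch; the profile name is from this run):

    # List all kube-apiserver containers, running or exited, inside the node
    minikube -p old-k8s-version-169187 ssh -- \
        "docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}"
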
	I0407 13:51:29.186436 1819972 logs.go:123] Gathering logs for describe nodes ...
	I0407 13:51:29.186448 1819972 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0407 13:51:29.353217 1819972 logs.go:123] Gathering logs for coredns [d92117844997] ...
	I0407 13:51:29.353251 1819972 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d92117844997"
	I0407 13:51:29.387072 1819972 logs.go:123] Gathering logs for container status ...
	I0407 13:51:29.387100 1819972 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 13:51:29.455821 1819972 logs.go:123] Gathering logs for coredns [a2086baae207] ...
	I0407 13:51:29.455854 1819972 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2086baae207"
	I0407 13:51:29.477100 1819972 logs.go:123] Gathering logs for kube-scheduler [fce53c7f2eb0] ...
	I0407 13:51:29.477128 1819972 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fce53c7f2eb0"
	I0407 13:51:29.501934 1819972 logs.go:123] Gathering logs for kube-scheduler [3a9781764312] ...
	I0407 13:51:29.501962 1819972 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a9781764312"
	I0407 13:51:29.528570 1819972 logs.go:123] Gathering logs for kube-proxy [7cb4581969c6] ...
	I0407 13:51:29.528725 1819972 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cb4581969c6"
	I0407 13:51:29.553260 1819972 logs.go:123] Gathering logs for kube-controller-manager [c2da54d5c256] ...
	I0407 13:51:29.553288 1819972 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2da54d5c256"
	I0407 13:51:29.596765 1819972 logs.go:123] Gathering logs for kube-controller-manager [3e48a853c03b] ...
	I0407 13:51:29.596803 1819972 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e48a853c03b"
	I0407 13:51:29.647057 1819972 logs.go:123] Gathering logs for storage-provisioner [55bf8eb1ab94] ...
	I0407 13:51:29.647091 1819972 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55bf8eb1ab94"
	I0407 13:51:29.684442 1819972 logs.go:123] Gathering logs for storage-provisioner [fcbefe8497a0] ...
	I0407 13:51:29.684472 1819972 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcbefe8497a0"
	I0407 13:51:29.710207 1819972 logs.go:123] Gathering logs for kube-apiserver [78f8992ce8b4] ...
	I0407 13:51:29.710235 1819972 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78f8992ce8b4"
	I0407 13:51:29.783435 1819972 logs.go:123] Gathering logs for kube-proxy [062895b6a45a] ...
	I0407 13:51:29.783469 1819972 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 062895b6a45a"
	I0407 13:51:29.805438 1819972 logs.go:123] Gathering logs for kubernetes-dashboard [c66d59ac00e0] ...
	I0407 13:51:29.805467 1819972 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c66d59ac00e0"
	I0407 13:51:29.827642 1819972 logs.go:123] Gathering logs for Docker ...
	I0407 13:51:29.827670 1819972 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0407 13:51:29.860550 1819972 logs.go:123] Gathering logs for kubelet ...
	I0407 13:51:29.860582 1819972 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
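
Everything gathered above is the last 400 lines per source: docker logs for each container found in the enumeration, plus journalctl for the docker/cri-docker and kubelet units. Any of these can be pulled by hand the same way, e.g. (a sketch; the container ID is specific to this run):

    # Last 400 lines from the newer kube-apiserver container
    minikube -p old-k8s-version-169187 ssh -- "docker logs --tail 400 82525be035b3"

    # Kubelet journal, the source of the problem lines flagged below
    minikube -p old-k8s-version-169187 ssh -- "sudo journalctl -u kubelet -n 400"
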
	W0407 13:51:29.925833 1819972 logs.go:138] Found kubelet problem: Apr 07 13:45:59 old-k8s-version-169187 kubelet[1477]: E0407 13:45:59.905405    1477 reflector.go:138] object-"default"/"default-token-n6f2l": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-n6f2l" is forbidden: User "system:node:old-k8s-version-169187" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-169187' and this object
	W0407 13:51:29.926096 1819972 logs.go:138] Found kubelet problem: Apr 07 13:45:59 old-k8s-version-169187 kubelet[1477]: E0407 13:45:59.905477    1477 reflector.go:138] object-"kube-system"/"kube-proxy-token-lqq9d": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-lqq9d" is forbidden: User "system:node:old-k8s-version-169187" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-169187' and this object
	W0407 13:51:29.926308 1819972 logs.go:138] Found kubelet problem: Apr 07 13:45:59 old-k8s-version-169187 kubelet[1477]: E0407 13:45:59.905533    1477 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-169187" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-169187' and this object
	W0407 13:51:29.926617 1819972 logs.go:138] Found kubelet problem: Apr 07 13:45:59 old-k8s-version-169187 kubelet[1477]: E0407 13:45:59.905580    1477 reflector.go:138] object-"kube-system"/"storage-provisioner-token-cxdxc": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-cxdxc" is forbidden: User "system:node:old-k8s-version-169187" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-169187' and this object
	W0407 13:51:29.926851 1819972 logs.go:138] Found kubelet problem: Apr 07 13:45:59 old-k8s-version-169187 kubelet[1477]: E0407 13:45:59.905695    1477 reflector.go:138] object-"kube-system"/"coredns-token-tttv5": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-tttv5" is forbidden: User "system:node:old-k8s-version-169187" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-169187' and this object
	W0407 13:51:29.927056 1819972 logs.go:138] Found kubelet problem: Apr 07 13:45:59 old-k8s-version-169187 kubelet[1477]: E0407 13:45:59.906374    1477 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-169187" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-169187' and this object
	W0407 13:51:29.933879 1819972 logs.go:138] Found kubelet problem: Apr 07 13:46:01 old-k8s-version-169187 kubelet[1477]: E0407 13:46:01.360618    1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0407 13:51:29.934551 1819972 logs.go:138] Found kubelet problem: Apr 07 13:46:02 old-k8s-version-169187 kubelet[1477]: E0407 13:46:02.348740    1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:51:29.935072 1819972 logs.go:138] Found kubelet problem: Apr 07 13:46:03 old-k8s-version-169187 kubelet[1477]: E0407 13:46:03.359622    1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:51:29.937497 1819972 logs.go:138] Found kubelet problem: Apr 07 13:46:15 old-k8s-version-169187 kubelet[1477]: E0407 13:46:15.664641    1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0407 13:51:29.942139 1819972 logs.go:138] Found kubelet problem: Apr 07 13:46:20 old-k8s-version-169187 kubelet[1477]: E0407 13:46:20.047413    1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0407 13:51:29.942704 1819972 logs.go:138] Found kubelet problem: Apr 07 13:46:20 old-k8s-version-169187 kubelet[1477]: E0407 13:46:20.686892    1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:51:29.942904 1819972 logs.go:138] Found kubelet problem: Apr 07 13:46:21 old-k8s-version-169187 kubelet[1477]: E0407 13:46:21.704280    1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:51:29.943269 1819972 logs.go:138] Found kubelet problem: Apr 07 13:46:26 old-k8s-version-169187 kubelet[1477]: E0407 13:46:26.633503    1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:51:29.943921 1819972 logs.go:138] Found kubelet problem: Apr 07 13:46:32 old-k8s-version-169187 kubelet[1477]: E0407 13:46:32.852394    1477 pod_workers.go:191] Error syncing pod 799a1ac5-a9e9-4fd4-b152-afc0c2012231 ("storage-provisioner_kube-system(799a1ac5-a9e9-4fd4-b152-afc0c2012231)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(799a1ac5-a9e9-4fd4-b152-afc0c2012231)"
	W0407 13:51:29.946305 1819972 logs.go:138] Found kubelet problem: Apr 07 13:46:34 old-k8s-version-169187 kubelet[1477]: E0407 13:46:34.079950    1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0407 13:51:29.948727 1819972 logs.go:138] Found kubelet problem: Apr 07 13:46:40 old-k8s-version-169187 kubelet[1477]: E0407 13:46:40.651036    1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0407 13:51:29.948928 1819972 logs.go:138] Found kubelet problem: Apr 07 13:46:45 old-k8s-version-169187 kubelet[1477]: E0407 13:46:45.642017    1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:51:29.949249 1819972 logs.go:138] Found kubelet problem: Apr 07 13:46:52 old-k8s-version-169187 kubelet[1477]: E0407 13:46:52.633525    1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:51:29.951501 1819972 logs.go:138] Found kubelet problem: Apr 07 13:47:00 old-k8s-version-169187 kubelet[1477]: E0407 13:47:00.133509    1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0407 13:51:29.951688 1819972 logs.go:138] Found kubelet problem: Apr 07 13:47:05 old-k8s-version-169187 kubelet[1477]: E0407 13:47:05.658773    1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:51:29.951887 1819972 logs.go:138] Found kubelet problem: Apr 07 13:47:14 old-k8s-version-169187 kubelet[1477]: E0407 13:47:14.633956    1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:51:29.952074 1819972 logs.go:138] Found kubelet problem: Apr 07 13:47:19 old-k8s-version-169187 kubelet[1477]: E0407 13:47:19.638120    1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:51:29.952273 1819972 logs.go:138] Found kubelet problem: Apr 07 13:47:28 old-k8s-version-169187 kubelet[1477]: E0407 13:47:28.644719    1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:51:29.954365 1819972 logs.go:138] Found kubelet problem: Apr 07 13:47:34 old-k8s-version-169187 kubelet[1477]: E0407 13:47:34.650220    1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0407 13:51:29.956620 1819972 logs.go:138] Found kubelet problem: Apr 07 13:47:43 old-k8s-version-169187 kubelet[1477]: E0407 13:47:43.179657    1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0407 13:51:29.956807 1819972 logs.go:138] Found kubelet problem: Apr 07 13:47:47 old-k8s-version-169187 kubelet[1477]: E0407 13:47:47.633825    1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:51:29.957004 1819972 logs.go:138] Found kubelet problem: Apr 07 13:47:54 old-k8s-version-169187 kubelet[1477]: E0407 13:47:54.633728    1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:51:29.957190 1819972 logs.go:138] Found kubelet problem: Apr 07 13:47:58 old-k8s-version-169187 kubelet[1477]: E0407 13:47:58.633722    1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:51:29.957389 1819972 logs.go:138] Found kubelet problem: Apr 07 13:48:07 old-k8s-version-169187 kubelet[1477]: E0407 13:48:07.636662    1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:51:29.957578 1819972 logs.go:138] Found kubelet problem: Apr 07 13:48:09 old-k8s-version-169187 kubelet[1477]: E0407 13:48:09.636763    1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:51:29.957776 1819972 logs.go:138] Found kubelet problem: Apr 07 13:48:22 old-k8s-version-169187 kubelet[1477]: E0407 13:48:22.633343    1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:51:29.957964 1819972 logs.go:138] Found kubelet problem: Apr 07 13:48:24 old-k8s-version-169187 kubelet[1477]: E0407 13:48:24.633503    1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:51:29.958163 1819972 logs.go:138] Found kubelet problem: Apr 07 13:48:35 old-k8s-version-169187 kubelet[1477]: E0407 13:48:35.643057    1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:51:29.958349 1819972 logs.go:138] Found kubelet problem: Apr 07 13:48:38 old-k8s-version-169187 kubelet[1477]: E0407 13:48:38.633424    1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:51:29.958546 1819972 logs.go:138] Found kubelet problem: Apr 07 13:48:47 old-k8s-version-169187 kubelet[1477]: E0407 13:48:47.633680    1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:51:29.958732 1819972 logs.go:138] Found kubelet problem: Apr 07 13:48:53 old-k8s-version-169187 kubelet[1477]: E0407 13:48:53.633745    1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:51:29.958929 1819972 logs.go:138] Found kubelet problem: Apr 07 13:49:02 old-k8s-version-169187 kubelet[1477]: E0407 13:49:02.633346    1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:51:29.961032 1819972 logs.go:138] Found kubelet problem: Apr 07 13:49:05 old-k8s-version-169187 kubelet[1477]: E0407 13:49:05.657505    1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0407 13:51:29.961220 1819972 logs.go:138] Found kubelet problem: Apr 07 13:49:17 old-k8s-version-169187 kubelet[1477]: E0407 13:49:17.634169    1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:51:29.963463 1819972 logs.go:138] Found kubelet problem: Apr 07 13:49:18 old-k8s-version-169187 kubelet[1477]: E0407 13:49:18.080057    1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0407 13:51:29.963688 1819972 logs.go:138] Found kubelet problem: Apr 07 13:49:29 old-k8s-version-169187 kubelet[1477]: E0407 13:49:29.638278    1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:51:29.963877 1819972 logs.go:138] Found kubelet problem: Apr 07 13:49:30 old-k8s-version-169187 kubelet[1477]: E0407 13:49:30.638576    1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:51:29.964075 1819972 logs.go:138] Found kubelet problem: Apr 07 13:49:41 old-k8s-version-169187 kubelet[1477]: E0407 13:49:41.654838    1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:51:29.964262 1819972 logs.go:138] Found kubelet problem: Apr 07 13:49:43 old-k8s-version-169187 kubelet[1477]: E0407 13:49:43.636263    1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:51:29.964459 1819972 logs.go:138] Found kubelet problem: Apr 07 13:49:56 old-k8s-version-169187 kubelet[1477]: E0407 13:49:56.633358    1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:51:29.964644 1819972 logs.go:138] Found kubelet problem: Apr 07 13:49:57 old-k8s-version-169187 kubelet[1477]: E0407 13:49:57.633386    1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:51:29.964842 1819972 logs.go:138] Found kubelet problem: Apr 07 13:50:07 old-k8s-version-169187 kubelet[1477]: E0407 13:50:07.638473    1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:51:29.965027 1819972 logs.go:138] Found kubelet problem: Apr 07 13:50:12 old-k8s-version-169187 kubelet[1477]: E0407 13:50:12.633385    1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:51:29.965226 1819972 logs.go:138] Found kubelet problem: Apr 07 13:50:22 old-k8s-version-169187 kubelet[1477]: E0407 13:50:22.633440    1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:51:29.965411 1819972 logs.go:138] Found kubelet problem: Apr 07 13:50:25 old-k8s-version-169187 kubelet[1477]: E0407 13:50:25.633558    1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:51:29.965615 1819972 logs.go:138] Found kubelet problem: Apr 07 13:50:37 old-k8s-version-169187 kubelet[1477]: E0407 13:50:37.636391    1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:51:29.965804 1819972 logs.go:138] Found kubelet problem: Apr 07 13:50:39 old-k8s-version-169187 kubelet[1477]: E0407 13:50:39.634229    1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:51:29.965989 1819972 logs.go:138] Found kubelet problem: Apr 07 13:50:51 old-k8s-version-169187 kubelet[1477]: E0407 13:50:51.633542    1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:51:29.966187 1819972 logs.go:138] Found kubelet problem: Apr 07 13:50:51 old-k8s-version-169187 kubelet[1477]: E0407 13:50:51.643349    1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:51:29.966384 1819972 logs.go:138] Found kubelet problem: Apr 07 13:51:03 old-k8s-version-169187 kubelet[1477]: E0407 13:51:03.636916    1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:51:29.966569 1819972 logs.go:138] Found kubelet problem: Apr 07 13:51:06 old-k8s-version-169187 kubelet[1477]: E0407 13:51:06.633346    1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:51:29.966766 1819972 logs.go:138] Found kubelet problem: Apr 07 13:51:17 old-k8s-version-169187 kubelet[1477]: E0407 13:51:17.638234    1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:51:29.966952 1819972 logs.go:138] Found kubelet problem: Apr 07 13:51:19 old-k8s-version-169187 kubelet[1477]: E0407 13:51:19.643952    1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
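
Two distinct pull failures repeat through the whole window. The metrics-server image points at fake.domain, a registry host that never resolves from the node, so its ErrImagePull / ImagePullBackOff loop is the direct reason the pod above never went Ready. The dashboard-metrics-scraper failure is different: registry.k8s.io/echoserver:1.4 is an old Docker image manifest v2 schema 1 image, which newer Docker daemons refuse to pull by default, exactly as the DEPRECATION NOTICE in the kubelet lines says. Both can be spot-checked from a workstation (a sketch; nslookup behaviour depends on the local resolver, and docker manifest inspect may need registry access):

    # (a) the metrics-server registry host is expected not to resolve
    nslookup fake.domain || echo "fake.domain does not resolve (expected)"

    # (b) a schema 1 media type in the manifest explains the daemon's refusal
    docker manifest inspect --verbose registry.k8s.io/echoserver:1.4
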
	I0407 13:51:29.966966 1819972 logs.go:123] Gathering logs for dmesg ...
	I0407 13:51:29.966985 1819972 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 13:51:29.986242 1819972 logs.go:123] Gathering logs for kube-apiserver [82525be035b3] ...
	I0407 13:51:29.986276 1819972 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82525be035b3"
	I0407 13:51:30.086153 1819972 logs.go:123] Gathering logs for etcd [b45737d73f96] ...
	I0407 13:51:30.086196 1819972 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b45737d73f96"
	I0407 13:51:30.124690 1819972 logs.go:123] Gathering logs for etcd [f4fcf1ba0dce] ...
	I0407 13:51:30.124744 1819972 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4fcf1ba0dce"
	I0407 13:51:30.155699 1819972 out.go:358] Setting ErrFile to fd 2...
	I0407 13:51:30.155727 1819972 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0407 13:51:30.155790 1819972 out.go:270] X Problems detected in kubelet:
	W0407 13:51:30.155801 1819972 out.go:270]   Apr 07 13:50:51 old-k8s-version-169187 kubelet[1477]: E0407 13:50:51.643349    1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:51:30.155808 1819972 out.go:270]   Apr 07 13:51:03 old-k8s-version-169187 kubelet[1477]: E0407 13:51:03.636916    1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:51:30.155886 1819972 out.go:270]   Apr 07 13:51:06 old-k8s-version-169187 kubelet[1477]: E0407 13:51:06.633346    1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:51:30.155894 1819972 out.go:270]   Apr 07 13:51:17 old-k8s-version-169187 kubelet[1477]: E0407 13:51:17.638234    1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:51:30.155899 1819972 out.go:270]   Apr 07 13:51:19 old-k8s-version-169187 kubelet[1477]: E0407 13:51:19.643952    1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0407 13:51:30.155904 1819972 out.go:358] Setting ErrFile to fd 2...
	I0407 13:51:30.155909 1819972 out.go:392] TERM=,COLORTERM=, which probably does not support color
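The kubelet problems summarized above can be re-checked directly on the node. A minimal sketch, assuming SSH access to the running profile (old-k8s-version-169187) and that the kubelet runs as a systemd unit; it mirrors the journalctl command the log collector itself runs:

	# scan the kubelet journal for the recurring image-pull failures
	minikube -p old-k8s-version-169187 ssh -- \
	  "sudo journalctl -u kubelet -n 400 | grep -E 'ErrImagePull|ImagePullBackOff'"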
	I0407 13:51:40.157136 1819972 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:51:40.169890 1819972 api_server.go:72] duration metric: took 5m53.648300907s to wait for apiserver process to appear ...
	I0407 13:51:40.169915 1819972 api_server.go:88] waiting for apiserver healthz status ...
	I0407 13:51:40.170006 1819972 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0407 13:51:40.192333 1819972 logs.go:282] 2 containers: [82525be035b3 78f8992ce8b4]
	I0407 13:51:40.192413 1819972 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0407 13:51:40.216209 1819972 logs.go:282] 2 containers: [b45737d73f96 f4fcf1ba0dce]
	I0407 13:51:40.216295 1819972 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0407 13:51:40.239039 1819972 logs.go:282] 2 containers: [a2086baae207 d92117844997]
	I0407 13:51:40.239125 1819972 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0407 13:51:40.262354 1819972 logs.go:282] 2 containers: [fce53c7f2eb0 3a9781764312]
	I0407 13:51:40.262433 1819972 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0407 13:51:40.283816 1819972 logs.go:282] 2 containers: [062895b6a45a 7cb4581969c6]
	I0407 13:51:40.283903 1819972 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0407 13:51:40.305544 1819972 logs.go:282] 2 containers: [c2da54d5c256 3e48a853c03b]
	I0407 13:51:40.305632 1819972 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0407 13:51:40.326317 1819972 logs.go:282] 0 containers: []
	W0407 13:51:40.326339 1819972 logs.go:284] No container was found matching "kindnet"
	I0407 13:51:40.326394 1819972 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0407 13:51:40.346269 1819972 logs.go:282] 2 containers: [55bf8eb1ab94 fcbefe8497a0]
	I0407 13:51:40.346404 1819972 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0407 13:51:40.368221 1819972 logs.go:282] 1 containers: [c66d59ac00e0]
	I0407 13:51:40.368255 1819972 logs.go:123] Gathering logs for etcd [b45737d73f96] ...
	I0407 13:51:40.368267 1819972 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b45737d73f96"
	I0407 13:51:40.405468 1819972 logs.go:123] Gathering logs for coredns [a2086baae207] ...
	I0407 13:51:40.405507 1819972 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2086baae207"
	I0407 13:51:40.429912 1819972 logs.go:123] Gathering logs for kube-scheduler [3a9781764312] ...
	I0407 13:51:40.429941 1819972 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a9781764312"
	I0407 13:51:40.463734 1819972 logs.go:123] Gathering logs for kube-proxy [7cb4581969c6] ...
	I0407 13:51:40.463768 1819972 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cb4581969c6"
	I0407 13:51:40.490415 1819972 logs.go:123] Gathering logs for kubernetes-dashboard [c66d59ac00e0] ...
	I0407 13:51:40.490443 1819972 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c66d59ac00e0"
	I0407 13:51:40.524475 1819972 logs.go:123] Gathering logs for kubelet ...
	I0407 13:51:40.524504 1819972 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0407 13:51:40.585432 1819972 logs.go:138] Found kubelet problem: Apr 07 13:45:59 old-k8s-version-169187 kubelet[1477]: E0407 13:45:59.905405    1477 reflector.go:138] object-"default"/"default-token-n6f2l": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-n6f2l" is forbidden: User "system:node:old-k8s-version-169187" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-169187' and this object
	W0407 13:51:40.585693 1819972 logs.go:138] Found kubelet problem: Apr 07 13:45:59 old-k8s-version-169187 kubelet[1477]: E0407 13:45:59.905477    1477 reflector.go:138] object-"kube-system"/"kube-proxy-token-lqq9d": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-lqq9d" is forbidden: User "system:node:old-k8s-version-169187" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-169187' and this object
	W0407 13:51:40.585903 1819972 logs.go:138] Found kubelet problem: Apr 07 13:45:59 old-k8s-version-169187 kubelet[1477]: E0407 13:45:59.905533    1477 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-169187" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-169187' and this object
	W0407 13:51:40.586131 1819972 logs.go:138] Found kubelet problem: Apr 07 13:45:59 old-k8s-version-169187 kubelet[1477]: E0407 13:45:59.905580    1477 reflector.go:138] object-"kube-system"/"storage-provisioner-token-cxdxc": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-cxdxc" is forbidden: User "system:node:old-k8s-version-169187" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-169187' and this object
	W0407 13:51:40.586341 1819972 logs.go:138] Found kubelet problem: Apr 07 13:45:59 old-k8s-version-169187 kubelet[1477]: E0407 13:45:59.905695    1477 reflector.go:138] object-"kube-system"/"coredns-token-tttv5": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-tttv5" is forbidden: User "system:node:old-k8s-version-169187" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-169187' and this object
	W0407 13:51:40.586541 1819972 logs.go:138] Found kubelet problem: Apr 07 13:45:59 old-k8s-version-169187 kubelet[1477]: E0407 13:45:59.906374    1477 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-169187" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-169187' and this object
	W0407 13:51:40.593243 1819972 logs.go:138] Found kubelet problem: Apr 07 13:46:01 old-k8s-version-169187 kubelet[1477]: E0407 13:46:01.360618    1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0407 13:51:40.593907 1819972 logs.go:138] Found kubelet problem: Apr 07 13:46:02 old-k8s-version-169187 kubelet[1477]: E0407 13:46:02.348740    1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:51:40.594420 1819972 logs.go:138] Found kubelet problem: Apr 07 13:46:03 old-k8s-version-169187 kubelet[1477]: E0407 13:46:03.359622    1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:51:40.596854 1819972 logs.go:138] Found kubelet problem: Apr 07 13:46:15 old-k8s-version-169187 kubelet[1477]: E0407 13:46:15.664641    1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0407 13:51:40.601421 1819972 logs.go:138] Found kubelet problem: Apr 07 13:46:20 old-k8s-version-169187 kubelet[1477]: E0407 13:46:20.047413    1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0407 13:51:40.601983 1819972 logs.go:138] Found kubelet problem: Apr 07 13:46:20 old-k8s-version-169187 kubelet[1477]: E0407 13:46:20.686892    1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:51:40.602183 1819972 logs.go:138] Found kubelet problem: Apr 07 13:46:21 old-k8s-version-169187 kubelet[1477]: E0407 13:46:21.704280    1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:51:40.602545 1819972 logs.go:138] Found kubelet problem: Apr 07 13:46:26 old-k8s-version-169187 kubelet[1477]: E0407 13:46:26.633503    1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:51:40.603191 1819972 logs.go:138] Found kubelet problem: Apr 07 13:46:32 old-k8s-version-169187 kubelet[1477]: E0407 13:46:32.852394    1477 pod_workers.go:191] Error syncing pod 799a1ac5-a9e9-4fd4-b152-afc0c2012231 ("storage-provisioner_kube-system(799a1ac5-a9e9-4fd4-b152-afc0c2012231)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(799a1ac5-a9e9-4fd4-b152-afc0c2012231)"
	W0407 13:51:40.605554 1819972 logs.go:138] Found kubelet problem: Apr 07 13:46:34 old-k8s-version-169187 kubelet[1477]: E0407 13:46:34.079950    1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0407 13:51:40.607980 1819972 logs.go:138] Found kubelet problem: Apr 07 13:46:40 old-k8s-version-169187 kubelet[1477]: E0407 13:46:40.651036    1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0407 13:51:40.608182 1819972 logs.go:138] Found kubelet problem: Apr 07 13:46:45 old-k8s-version-169187 kubelet[1477]: E0407 13:46:45.642017    1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:51:40.608499 1819972 logs.go:138] Found kubelet problem: Apr 07 13:46:52 old-k8s-version-169187 kubelet[1477]: E0407 13:46:52.633525    1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:51:40.610729 1819972 logs.go:138] Found kubelet problem: Apr 07 13:47:00 old-k8s-version-169187 kubelet[1477]: E0407 13:47:00.133509    1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0407 13:51:40.610915 1819972 logs.go:138] Found kubelet problem: Apr 07 13:47:05 old-k8s-version-169187 kubelet[1477]: E0407 13:47:05.658773    1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:51:40.611111 1819972 logs.go:138] Found kubelet problem: Apr 07 13:47:14 old-k8s-version-169187 kubelet[1477]: E0407 13:47:14.633956    1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:51:40.611295 1819972 logs.go:138] Found kubelet problem: Apr 07 13:47:19 old-k8s-version-169187 kubelet[1477]: E0407 13:47:19.638120    1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:51:40.611490 1819972 logs.go:138] Found kubelet problem: Apr 07 13:47:28 old-k8s-version-169187 kubelet[1477]: E0407 13:47:28.644719    1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:51:40.613571 1819972 logs.go:138] Found kubelet problem: Apr 07 13:47:34 old-k8s-version-169187 kubelet[1477]: E0407 13:47:34.650220    1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0407 13:51:40.615805 1819972 logs.go:138] Found kubelet problem: Apr 07 13:47:43 old-k8s-version-169187 kubelet[1477]: E0407 13:47:43.179657    1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0407 13:51:40.615992 1819972 logs.go:138] Found kubelet problem: Apr 07 13:47:47 old-k8s-version-169187 kubelet[1477]: E0407 13:47:47.633825    1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:51:40.616188 1819972 logs.go:138] Found kubelet problem: Apr 07 13:47:54 old-k8s-version-169187 kubelet[1477]: E0407 13:47:54.633728    1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:51:40.616374 1819972 logs.go:138] Found kubelet problem: Apr 07 13:47:58 old-k8s-version-169187 kubelet[1477]: E0407 13:47:58.633722    1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:51:40.616569 1819972 logs.go:138] Found kubelet problem: Apr 07 13:48:07 old-k8s-version-169187 kubelet[1477]: E0407 13:48:07.636662    1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:51:40.616749 1819972 logs.go:138] Found kubelet problem: Apr 07 13:48:09 old-k8s-version-169187 kubelet[1477]: E0407 13:48:09.636763    1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:51:40.616941 1819972 logs.go:138] Found kubelet problem: Apr 07 13:48:22 old-k8s-version-169187 kubelet[1477]: E0407 13:48:22.633343    1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:51:40.617122 1819972 logs.go:138] Found kubelet problem: Apr 07 13:48:24 old-k8s-version-169187 kubelet[1477]: E0407 13:48:24.633503    1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:51:40.617319 1819972 logs.go:138] Found kubelet problem: Apr 07 13:48:35 old-k8s-version-169187 kubelet[1477]: E0407 13:48:35.643057    1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:51:40.617507 1819972 logs.go:138] Found kubelet problem: Apr 07 13:48:38 old-k8s-version-169187 kubelet[1477]: E0407 13:48:38.633424    1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:51:40.617702 1819972 logs.go:138] Found kubelet problem: Apr 07 13:48:47 old-k8s-version-169187 kubelet[1477]: E0407 13:48:47.633680    1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:51:40.617885 1819972 logs.go:138] Found kubelet problem: Apr 07 13:48:53 old-k8s-version-169187 kubelet[1477]: E0407 13:48:53.633745    1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:51:40.618081 1819972 logs.go:138] Found kubelet problem: Apr 07 13:49:02 old-k8s-version-169187 kubelet[1477]: E0407 13:49:02.633346    1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:51:40.620175 1819972 logs.go:138] Found kubelet problem: Apr 07 13:49:05 old-k8s-version-169187 kubelet[1477]: E0407 13:49:05.657505    1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0407 13:51:40.620361 1819972 logs.go:138] Found kubelet problem: Apr 07 13:49:17 old-k8s-version-169187 kubelet[1477]: E0407 13:49:17.634169    1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:51:40.622592 1819972 logs.go:138] Found kubelet problem: Apr 07 13:49:18 old-k8s-version-169187 kubelet[1477]: E0407 13:49:18.080057    1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0407 13:51:40.622790 1819972 logs.go:138] Found kubelet problem: Apr 07 13:49:29 old-k8s-version-169187 kubelet[1477]: E0407 13:49:29.638278    1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:51:40.622975 1819972 logs.go:138] Found kubelet problem: Apr 07 13:49:30 old-k8s-version-169187 kubelet[1477]: E0407 13:49:30.638576    1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:51:40.623170 1819972 logs.go:138] Found kubelet problem: Apr 07 13:49:41 old-k8s-version-169187 kubelet[1477]: E0407 13:49:41.654838    1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:51:40.623354 1819972 logs.go:138] Found kubelet problem: Apr 07 13:49:43 old-k8s-version-169187 kubelet[1477]: E0407 13:49:43.636263    1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:51:40.623557 1819972 logs.go:138] Found kubelet problem: Apr 07 13:49:56 old-k8s-version-169187 kubelet[1477]: E0407 13:49:56.633358    1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:51:40.623742 1819972 logs.go:138] Found kubelet problem: Apr 07 13:49:57 old-k8s-version-169187 kubelet[1477]: E0407 13:49:57.633386    1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:51:40.623944 1819972 logs.go:138] Found kubelet problem: Apr 07 13:50:07 old-k8s-version-169187 kubelet[1477]: E0407 13:50:07.638473    1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:51:40.624128 1819972 logs.go:138] Found kubelet problem: Apr 07 13:50:12 old-k8s-version-169187 kubelet[1477]: E0407 13:50:12.633385    1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:51:40.624326 1819972 logs.go:138] Found kubelet problem: Apr 07 13:50:22 old-k8s-version-169187 kubelet[1477]: E0407 13:50:22.633440    1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:51:40.624510 1819972 logs.go:138] Found kubelet problem: Apr 07 13:50:25 old-k8s-version-169187 kubelet[1477]: E0407 13:50:25.633558    1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:51:40.624729 1819972 logs.go:138] Found kubelet problem: Apr 07 13:50:37 old-k8s-version-169187 kubelet[1477]: E0407 13:50:37.636391    1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:51:40.624919 1819972 logs.go:138] Found kubelet problem: Apr 07 13:50:39 old-k8s-version-169187 kubelet[1477]: E0407 13:50:39.634229    1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:51:40.625102 1819972 logs.go:138] Found kubelet problem: Apr 07 13:50:51 old-k8s-version-169187 kubelet[1477]: E0407 13:50:51.633542    1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:51:40.625298 1819972 logs.go:138] Found kubelet problem: Apr 07 13:50:51 old-k8s-version-169187 kubelet[1477]: E0407 13:50:51.643349    1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:51:40.625497 1819972 logs.go:138] Found kubelet problem: Apr 07 13:51:03 old-k8s-version-169187 kubelet[1477]: E0407 13:51:03.636916    1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:51:40.625681 1819972 logs.go:138] Found kubelet problem: Apr 07 13:51:06 old-k8s-version-169187 kubelet[1477]: E0407 13:51:06.633346    1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:51:40.625877 1819972 logs.go:138] Found kubelet problem: Apr 07 13:51:17 old-k8s-version-169187 kubelet[1477]: E0407 13:51:17.638234    1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:51:40.626063 1819972 logs.go:138] Found kubelet problem: Apr 07 13:51:19 old-k8s-version-169187 kubelet[1477]: E0407 13:51:19.643952    1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:51:40.626260 1819972 logs.go:138] Found kubelet problem: Apr 07 13:51:32 old-k8s-version-169187 kubelet[1477]: E0407 13:51:32.635754    1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:51:40.626442 1819972 logs.go:138] Found kubelet problem: Apr 07 13:51:33 old-k8s-version-169187 kubelet[1477]: E0407 13:51:33.635745    1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
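Two distinct pull failures recur through the kubelet log above: metrics-server points at fake.domain, a registry host the test deliberately wires in and which is expected not to resolve, and dashboard-metrics-scraper pulls registry.k8s.io/echoserver:1.4, a Docker Image manifest v2 schema 1 image that Docker 28 rejects by default. A hedged sketch of confirming each from the host, assuming getent is present in the node image and that the local Docker CLI exposes docker manifest inspect:

	# (a) the fake registry host should fail to resolve inside the node
	minikube -p old-k8s-version-169187 ssh -- \
	  "getent hosts fake.domain || echo 'lookup failed (expected)'"

	# (b) the echoserver manifest still advertises the legacy schema version
	docker manifest inspect registry.k8s.io/echoserver:1.4 | grep -i schemaVersion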
	I0407 13:51:40.626453 1819972 logs.go:123] Gathering logs for kube-apiserver [82525be035b3] ...
	I0407 13:51:40.626467 1819972 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82525be035b3"
	I0407 13:51:40.692053 1819972 logs.go:123] Gathering logs for etcd [f4fcf1ba0dce] ...
	I0407 13:51:40.692095 1819972 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4fcf1ba0dce"
	I0407 13:51:40.730914 1819972 logs.go:123] Gathering logs for kube-scheduler [fce53c7f2eb0] ...
	I0407 13:51:40.731004 1819972 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fce53c7f2eb0"
	I0407 13:51:40.756163 1819972 logs.go:123] Gathering logs for kube-proxy [062895b6a45a] ...
	I0407 13:51:40.756192 1819972 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 062895b6a45a"
	I0407 13:51:40.779888 1819972 logs.go:123] Gathering logs for container status ...
	I0407 13:51:40.779915 1819972 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 13:51:40.830573 1819972 logs.go:123] Gathering logs for kube-controller-manager [c2da54d5c256] ...
	I0407 13:51:40.830603 1819972 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2da54d5c256"
	I0407 13:51:40.883151 1819972 logs.go:123] Gathering logs for dmesg ...
	I0407 13:51:40.883193 1819972 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 13:51:40.899720 1819972 logs.go:123] Gathering logs for describe nodes ...
	I0407 13:51:40.899749 1819972 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0407 13:51:41.056544 1819972 logs.go:123] Gathering logs for coredns [d92117844997] ...
	I0407 13:51:41.056575 1819972 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d92117844997"
	I0407 13:51:41.081183 1819972 logs.go:123] Gathering logs for storage-provisioner [55bf8eb1ab94] ...
	I0407 13:51:41.081212 1819972 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55bf8eb1ab94"
	I0407 13:51:41.104294 1819972 logs.go:123] Gathering logs for storage-provisioner [fcbefe8497a0] ...
	I0407 13:51:41.104324 1819972 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcbefe8497a0"
	I0407 13:51:41.126590 1819972 logs.go:123] Gathering logs for Docker ...
	I0407 13:51:41.126619 1819972 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0407 13:51:41.152068 1819972 logs.go:123] Gathering logs for kube-controller-manager [3e48a853c03b] ...
	I0407 13:51:41.152100 1819972 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e48a853c03b"
	I0407 13:51:41.190557 1819972 logs.go:123] Gathering logs for kube-apiserver [78f8992ce8b4] ...
	I0407 13:51:41.190633 1819972 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78f8992ce8b4"
	I0407 13:51:41.268535 1819972 out.go:358] Setting ErrFile to fd 2...
	I0407 13:51:41.268567 1819972 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0407 13:51:41.268629 1819972 out.go:270] X Problems detected in kubelet:
	W0407 13:51:41.268642 1819972 out.go:270]   Apr 07 13:51:06 old-k8s-version-169187 kubelet[1477]: E0407 13:51:06.633346    1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:51:41.268652 1819972 out.go:270]   Apr 07 13:51:17 old-k8s-version-169187 kubelet[1477]: E0407 13:51:17.638234    1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:51:41.268659 1819972 out.go:270]   Apr 07 13:51:19 old-k8s-version-169187 kubelet[1477]: E0407 13:51:19.643952    1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:51:41.268664 1819972 out.go:270]   Apr 07 13:51:32 old-k8s-version-169187 kubelet[1477]: E0407 13:51:32.635754    1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:51:41.268672 1819972 out.go:270]   Apr 07 13:51:33 old-k8s-version-169187 kubelet[1477]: E0407 13:51:33.635745    1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0407 13:51:41.268683 1819972 out.go:358] Setting ErrFile to fd 2...
	I0407 13:51:41.268688 1819972 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 13:51:51.270356 1819972 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0407 13:51:51.280041 1819972 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
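Note that the apiserver itself is healthy at this point; the exit code 102 below comes from the version-update wait timing out, not from a dead control plane. The same probe can be repeated by hand from the host running the container; a sketch, using the node IP from the log and -k because the apiserver serves a cluster-internal certificate:

	curl -k https://192.168.76.2:8443/healthz
	# expected output: ok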
	I0407 13:51:51.283661 1819972 out.go:201] 
	W0407 13:51:51.286521 1819972 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W0407 13:51:51.286680 1819972 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	W0407 13:51:51.286738 1819972 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	W0407 13:51:51.286781 1819972 out.go:270] * 
	W0407 13:51:51.287792 1819972 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
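The recovery path suggested in the failure message can be scripted as below; a sketch only, and destructive, since --purge also removes the shared .minikube cache. The restart flags are trimmed from the original test invocation:

	minikube delete --all --purge
	minikube start -p old-k8s-version-169187 --driver=docker \
	  --container-runtime=docker --kubernetes-version=v1.20.0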
	I0407 13:51:51.291196 1819972 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p old-k8s-version-169187 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0": exit status 102
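For local reproduction, the failing case is an ordinary Go test in minikube's test/integration package (start_stop_delete_test.go). One plausible invocation, assuming a checkout of the minikube repo and a prebuilt out/minikube-linux-arm64 binary; exact flags may differ, and the repo's make integration target is the supported entry point:

	go test ./test/integration \
	  -run 'TestStartStop/group/old-k8s-version/serial/SecondStart' -timeout 30m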
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-169187
helpers_test.go:235: (dbg) docker inspect old-k8s-version-169187:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "685e713d0440990ac7eaa4f04d0b10b9ef68187d9df8fa217a8bbe7f1d9b838f",
	        "Created": "2025-04-07T13:43:02.411484895Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1820102,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-04-07T13:45:37.789338054Z",
	            "FinishedAt": "2025-04-07T13:45:36.542041628Z"
	        },
	        "Image": "sha256:1a97cd9e9bbab266425b883d3ed87fee4969302ed9a49ce4df4bf460f6bbf404",
	        "ResolvConfPath": "/var/lib/docker/containers/685e713d0440990ac7eaa4f04d0b10b9ef68187d9df8fa217a8bbe7f1d9b838f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/685e713d0440990ac7eaa4f04d0b10b9ef68187d9df8fa217a8bbe7f1d9b838f/hostname",
	        "HostsPath": "/var/lib/docker/containers/685e713d0440990ac7eaa4f04d0b10b9ef68187d9df8fa217a8bbe7f1d9b838f/hosts",
	        "LogPath": "/var/lib/docker/containers/685e713d0440990ac7eaa4f04d0b10b9ef68187d9df8fa217a8bbe7f1d9b838f/685e713d0440990ac7eaa4f04d0b10b9ef68187d9df8fa217a8bbe7f1d9b838f-json.log",
	        "Name": "/old-k8s-version-169187",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-169187:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-169187",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "685e713d0440990ac7eaa4f04d0b10b9ef68187d9df8fa217a8bbe7f1d9b838f",
	                "LowerDir": "/var/lib/docker/overlay2/7c8ee11aeeb34c5fedcbaebec6ba343fe606838c44b7d26de5867ee0103fb670-init/diff:/var/lib/docker/overlay2/2fffce34c50e77173db4df34163cc0f451b50794e01d4ae821270ba6f3468b6b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7c8ee11aeeb34c5fedcbaebec6ba343fe606838c44b7d26de5867ee0103fb670/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7c8ee11aeeb34c5fedcbaebec6ba343fe606838c44b7d26de5867ee0103fb670/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7c8ee11aeeb34c5fedcbaebec6ba343fe606838c44b7d26de5867ee0103fb670/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-169187",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-169187/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-169187",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-169187",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-169187",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9892ed8f0235061ccb7c524b9650d9f6612ddc6e9d4b8c5e22a969c98e67de8f",
	            "SandboxKey": "/var/run/docker/netns/9892ed8f0235",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34611"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34612"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34615"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34613"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34614"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-169187": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "c6:c6:fc:25:51:94",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "760c8c32a067b6850b152d6c6a7ed72d95e95fc7589c598a629781795e2c2278",
	                    "EndpointID": "6d240c4f72f351da7768bf9e2bab94c1998c8f98dbb6bb04bd06a5533056cd17",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-169187",
	                        "685e713d0440"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
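
The inspect dump above can be narrowed to the handful of fields this failure turns on. A minimal sketch, assuming the old-k8s-version-169187 container still exists on the host, uses docker inspect's Go-template flag to pull just the resource limits and the host port mappings:

	docker inspect -f '{{.HostConfig.Memory}} {{.HostConfig.NanoCpus}}' old-k8s-version-169187
	docker inspect -f '{{json .NetworkSettings.Ports}}' old-k8s-version-169187

The values line up with the start flags: --memory=2200 means 2200 MiB, i.e. 2200 * 1024 * 1024 = 2306867200 bytes ("Memory" above); "NanoCpus": 2000000000 is 2 CPUs at 10^9 nano-CPU units each; and "MemorySwap": 4613734400 is the engine's default of twice the memory limit when no explicit swap limit is given.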
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-169187 -n old-k8s-version-169187
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-169187 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-169187 logs -n 25: (2.023512114s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | docker-flags-055908 ssh                                | docker-flags-055908          | jenkins | v1.35.0 | 07 Apr 25 13:42 UTC | 07 Apr 25 13:42 UTC |
	|         | sudo systemctl show docker                             |                              |         |         |                     |                     |
	|         | --property=Environment                                 |                              |         |         |                     |                     |
	|         | --no-pager                                             |                              |         |         |                     |                     |
	| ssh     | docker-flags-055908 ssh                                | docker-flags-055908          | jenkins | v1.35.0 | 07 Apr 25 13:42 UTC | 07 Apr 25 13:42 UTC |
	|         | sudo systemctl show docker                             |                              |         |         |                     |                     |
	|         | --property=ExecStart                                   |                              |         |         |                     |                     |
	|         | --no-pager                                             |                              |         |         |                     |                     |
	| delete  | -p docker-flags-055908                                 | docker-flags-055908          | jenkins | v1.35.0 | 07 Apr 25 13:42 UTC | 07 Apr 25 13:42 UTC |
	| start   | -p cert-options-925217                                 | cert-options-925217          | jenkins | v1.35.0 | 07 Apr 25 13:42 UTC | 07 Apr 25 13:42 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1                              |                              |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15                          |                              |         |         |                     |                     |
	|         | --apiserver-names=localhost                            |                              |         |         |                     |                     |
	|         | --apiserver-names=www.google.com                       |                              |         |         |                     |                     |
	|         | --apiserver-port=8555                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=docker                             |                              |         |         |                     |                     |
	| ssh     | cert-options-925217 ssh                                | cert-options-925217          | jenkins | v1.35.0 | 07 Apr 25 13:42 UTC | 07 Apr 25 13:42 UTC |
	|         | openssl x509 -text -noout -in                          |                              |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                              |         |         |                     |                     |
	| ssh     | -p cert-options-925217 -- sudo                         | cert-options-925217          | jenkins | v1.35.0 | 07 Apr 25 13:42 UTC | 07 Apr 25 13:42 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                              |         |         |                     |                     |
	| delete  | -p cert-options-925217                                 | cert-options-925217          | jenkins | v1.35.0 | 07 Apr 25 13:42 UTC | 07 Apr 25 13:42 UTC |
	| start   | -p old-k8s-version-169187                              | old-k8s-version-169187       | jenkins | v1.35.0 | 07 Apr 25 13:42 UTC | 07 Apr 25 13:45 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=docker                             |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-687877                              | cert-expiration-687877       | jenkins | v1.35.0 | 07 Apr 25 13:44 UTC | 07 Apr 25 13:45 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=docker                             |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-687877                              | cert-expiration-687877       | jenkins | v1.35.0 | 07 Apr 25 13:45 UTC | 07 Apr 25 13:45 UTC |
	| start   | -p                                                     | default-k8s-diff-port-872084 | jenkins | v1.35.0 | 07 Apr 25 13:45 UTC | 07 Apr 25 13:46 UTC |
	|         | default-k8s-diff-port-872084                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=docker                             |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-169187        | old-k8s-version-169187       | jenkins | v1.35.0 | 07 Apr 25 13:45 UTC | 07 Apr 25 13:45 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-169187                              | old-k8s-version-169187       | jenkins | v1.35.0 | 07 Apr 25 13:45 UTC | 07 Apr 25 13:45 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-169187             | old-k8s-version-169187       | jenkins | v1.35.0 | 07 Apr 25 13:45 UTC | 07 Apr 25 13:45 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-169187                              | old-k8s-version-169187       | jenkins | v1.35.0 | 07 Apr 25 13:45 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=docker                             |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-872084  | default-k8s-diff-port-872084 | jenkins | v1.35.0 | 07 Apr 25 13:46 UTC | 07 Apr 25 13:46 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-872084 | jenkins | v1.35.0 | 07 Apr 25 13:46 UTC | 07 Apr 25 13:46 UTC |
	|         | default-k8s-diff-port-872084                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-872084       | default-k8s-diff-port-872084 | jenkins | v1.35.0 | 07 Apr 25 13:46 UTC | 07 Apr 25 13:46 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-872084 | jenkins | v1.35.0 | 07 Apr 25 13:46 UTC | 07 Apr 25 13:51 UTC |
	|         | default-k8s-diff-port-872084                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=docker                             |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                           |                              |         |         |                     |                     |
	| image   | default-k8s-diff-port-872084                           | default-k8s-diff-port-872084 | jenkins | v1.35.0 | 07 Apr 25 13:51 UTC | 07 Apr 25 13:51 UTC |
	|         | image list --format=json                               |                              |         |         |                     |                     |
	| pause   | -p                                                     | default-k8s-diff-port-872084 | jenkins | v1.35.0 | 07 Apr 25 13:51 UTC | 07 Apr 25 13:51 UTC |
	|         | default-k8s-diff-port-872084                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p                                                     | default-k8s-diff-port-872084 | jenkins | v1.35.0 | 07 Apr 25 13:51 UTC | 07 Apr 25 13:51 UTC |
	|         | default-k8s-diff-port-872084                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-872084 | jenkins | v1.35.0 | 07 Apr 25 13:51 UTC | 07 Apr 25 13:51 UTC |
	|         | default-k8s-diff-port-872084                           |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-872084 | jenkins | v1.35.0 | 07 Apr 25 13:51 UTC | 07 Apr 25 13:51 UTC |
	|         | default-k8s-diff-port-872084                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-690840                                  | embed-certs-690840           | jenkins | v1.35.0 | 07 Apr 25 13:51 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                              |         |         |                     |                     |
	|         |  --container-runtime=docker                            |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/07 13:51:36
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
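	Read against that format, the first entry below decodes as: severity I (info), date 0407 (April 7), wall-clock time 13:51:36.857946, thread/process id 1834508, source location out.go:345, then the free-form message.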
	I0407 13:51:36.857946 1834508 out.go:345] Setting OutFile to fd 1 ...
	I0407 13:51:36.858439 1834508 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 13:51:36.858477 1834508 out.go:358] Setting ErrFile to fd 2...
	I0407 13:51:36.858499 1834508 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 13:51:36.858795 1834508 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20598-1489638/.minikube/bin
	I0407 13:51:36.859255 1834508 out.go:352] Setting JSON to false
	I0407 13:51:36.860348 1834508 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":27245,"bootTime":1744006652,"procs":226,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1081-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0407 13:51:36.860458 1834508 start.go:139] virtualization:  
	I0407 13:51:36.864378 1834508 out.go:177] * [embed-certs-690840] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	I0407 13:51:36.868591 1834508 out.go:177]   - MINIKUBE_LOCATION=20598
	I0407 13:51:36.868733 1834508 notify.go:220] Checking for updates...
	I0407 13:51:36.872805 1834508 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0407 13:51:36.875866 1834508 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20598-1489638/kubeconfig
	I0407 13:51:36.878980 1834508 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20598-1489638/.minikube
	I0407 13:51:36.881850 1834508 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0407 13:51:36.884793 1834508 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0407 13:51:36.888375 1834508 config.go:182] Loaded profile config "old-k8s-version-169187": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0407 13:51:36.888489 1834508 driver.go:394] Setting default libvirt URI to qemu:///system
	I0407 13:51:36.913091 1834508 docker.go:123] docker version: linux-28.0.4:Docker Engine - Community
	I0407 13:51:36.913209 1834508 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0407 13:51:36.971613 1834508 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-04-07 13:51:36.961703143 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1081-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0]] Warnings:<nil>}}
	I0407 13:51:36.971725 1834508 docker.go:318] overlay module found
	I0407 13:51:36.974953 1834508 out.go:177] * Using the docker driver based on user configuration
	I0407 13:51:36.977895 1834508 start.go:297] selected driver: docker
	I0407 13:51:36.977914 1834508 start.go:901] validating driver "docker" against <nil>
	I0407 13:51:36.977929 1834508 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0407 13:51:36.978654 1834508 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0407 13:51:37.044856 1834508 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-04-07 13:51:37.030742658 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1081-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0]] Warnings:<nil>}}
	I0407 13:51:37.045006 1834508 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0407 13:51:37.045245 1834508 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0407 13:51:37.048977 1834508 out.go:177] * Using Docker driver with root privileges
	I0407 13:51:37.051875 1834508 cni.go:84] Creating CNI manager for ""
	I0407 13:51:37.051959 1834508 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0407 13:51:37.051973 1834508 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0407 13:51:37.052056 1834508 start.go:340] cluster config:
	{Name:embed-certs-690840 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:embed-certs-690840 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 13:51:37.055154 1834508 out.go:177] * Starting "embed-certs-690840" primary control-plane node in "embed-certs-690840" cluster
	I0407 13:51:37.057976 1834508 cache.go:121] Beginning downloading kic base image for docker with docker
	I0407 13:51:37.060926 1834508 out.go:177] * Pulling base image v0.0.46-1743675393-20591 ...
	I0407 13:51:37.063825 1834508 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime docker
	I0407 13:51:37.063886 1834508 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20598-1489638/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-docker-overlay2-arm64.tar.lz4
	I0407 13:51:37.063899 1834508 cache.go:56] Caching tarball of preloaded images
	I0407 13:51:37.063924 1834508 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 in local docker daemon
	I0407 13:51:37.064014 1834508 preload.go:172] Found /home/jenkins/minikube-integration/20598-1489638/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0407 13:51:37.064025 1834508 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on docker
	I0407 13:51:37.064146 1834508 profile.go:143] Saving config to /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/embed-certs-690840/config.json ...
	I0407 13:51:37.064177 1834508 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/embed-certs-690840/config.json: {Name:mkf128e7c0f140aadfda249a3ce6b29741225e11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:51:37.083303 1834508 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 in local docker daemon, skipping pull
	I0407 13:51:37.083326 1834508 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 exists in daemon, skipping load
	I0407 13:51:37.083344 1834508 cache.go:230] Successfully downloaded all kic artifacts
	I0407 13:51:37.083373 1834508 start.go:360] acquireMachinesLock for embed-certs-690840: {Name:mk78a25da2d634e43a1d98409ffb7d56e161fa1d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0407 13:51:37.084125 1834508 start.go:364] duration metric: took 730.325µs to acquireMachinesLock for "embed-certs-690840"
	I0407 13:51:37.084163 1834508 start.go:93] Provisioning new machine with config: &{Name:embed-certs-690840 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:embed-certs-690840 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0407 13:51:37.084238 1834508 start.go:125] createHost starting for "" (driver="docker")
	I0407 13:51:37.087569 1834508 out.go:235] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0407 13:51:37.087833 1834508 start.go:159] libmachine.API.Create for "embed-certs-690840" (driver="docker")
	I0407 13:51:37.087871 1834508 client.go:168] LocalClient.Create starting
	I0407 13:51:37.087935 1834508 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20598-1489638/.minikube/certs/ca.pem
	I0407 13:51:37.087978 1834508 main.go:141] libmachine: Decoding PEM data...
	I0407 13:51:37.087995 1834508 main.go:141] libmachine: Parsing certificate...
	I0407 13:51:37.088054 1834508 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20598-1489638/.minikube/certs/cert.pem
	I0407 13:51:37.088077 1834508 main.go:141] libmachine: Decoding PEM data...
	I0407 13:51:37.088090 1834508 main.go:141] libmachine: Parsing certificate...
	I0407 13:51:37.088444 1834508 cli_runner.go:164] Run: docker network inspect embed-certs-690840 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0407 13:51:37.104834 1834508 cli_runner.go:211] docker network inspect embed-certs-690840 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0407 13:51:37.104923 1834508 network_create.go:284] running [docker network inspect embed-certs-690840] to gather additional debugging logs...
	I0407 13:51:37.104944 1834508 cli_runner.go:164] Run: docker network inspect embed-certs-690840
	W0407 13:51:37.120532 1834508 cli_runner.go:211] docker network inspect embed-certs-690840 returned with exit code 1
	I0407 13:51:37.120579 1834508 network_create.go:287] error running [docker network inspect embed-certs-690840]: docker network inspect embed-certs-690840: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-690840 not found
	I0407 13:51:37.120593 1834508 network_create.go:289] output of [docker network inspect embed-certs-690840]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-690840 not found
	
	** /stderr **
	I0407 13:51:37.120810 1834508 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0407 13:51:37.138571 1834508 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-cb68a24093bb IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:32:02:6c:69:0b:7a} reservation:<nil>}
	I0407 13:51:37.138981 1834508 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-0e1fc9d3957e IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:12:9f:61:c6:81:75} reservation:<nil>}
	I0407 13:51:37.139310 1834508 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-9e29b45f042f IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:32:1e:9c:a9:01:d7} reservation:<nil>}
	I0407 13:51:37.139633 1834508 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-760c8c32a067 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:1a:67:ff:89:49:26} reservation:<nil>}
	I0407 13:51:37.140057 1834508 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019de200}
	I0407 13:51:37.140082 1834508 network_create.go:124] attempt to create docker network embed-certs-690840 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0407 13:51:37.140142 1834508 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-690840 embed-certs-690840
	I0407 13:51:37.202907 1834508 network_create.go:108] docker network embed-certs-690840 192.168.85.0/24 created
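
	The subnet scan above is minikube walking its private /24 candidates in order (192.168.49.0, 192.168.58.0, 192.168.67.0, 192.168.76.0, ...) and taking the first one no existing bridge occupies; the four earlier profiles hold the first four, so embed-certs-690840 lands on 192.168.85.0/24. The same occupancy view can be reproduced by hand, assuming a local docker CLI, with the template minikube itself runs above:

	docker network ls -q | xargs docker network inspect -f '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}}{{end}}'
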
	I0407 13:51:37.202941 1834508 kic.go:121] calculated static IP "192.168.85.2" for the "embed-certs-690840" container
	I0407 13:51:37.203016 1834508 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0407 13:51:37.226184 1834508 cli_runner.go:164] Run: docker volume create embed-certs-690840 --label name.minikube.sigs.k8s.io=embed-certs-690840 --label created_by.minikube.sigs.k8s.io=true
	I0407 13:51:37.245634 1834508 oci.go:103] Successfully created a docker volume embed-certs-690840
	I0407 13:51:37.245719 1834508 cli_runner.go:164] Run: docker run --rm --name embed-certs-690840-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-690840 --entrypoint /usr/bin/test -v embed-certs-690840:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 -d /var/lib
	I0407 13:51:37.783807 1834508 oci.go:107] Successfully prepared a docker volume embed-certs-690840
	I0407 13:51:37.783857 1834508 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime docker
	I0407 13:51:37.783878 1834508 kic.go:194] Starting extracting preloaded images to volume ...
	I0407 13:51:37.783942 1834508 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20598-1489638/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-690840:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 -I lz4 -xf /preloaded.tar -C /extractDir
	I0407 13:51:41.596764 1834508 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20598-1489638/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-690840:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 -I lz4 -xf /preloaded.tar -C /extractDir: (3.812778481s)
	I0407 13:51:41.596807 1834508 kic.go:203] duration metric: took 3.812925148s to extract preloaded images to volume ...
	W0407 13:51:41.596949 1834508 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0407 13:51:41.597069 1834508 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0407 13:51:41.659736 1834508 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-690840 --name embed-certs-690840 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-690840 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-690840 --network embed-certs-690840 --ip 192.168.85.2 --volume embed-certs-690840:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727
	I0407 13:51:40.157136 1819972 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:51:40.169890 1819972 api_server.go:72] duration metric: took 5m53.648300907s to wait for apiserver process to appear ...
	I0407 13:51:40.169915 1819972 api_server.go:88] waiting for apiserver healthz status ...
	I0407 13:51:40.170006 1819972 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0407 13:51:40.192333 1819972 logs.go:282] 2 containers: [82525be035b3 78f8992ce8b4]
	I0407 13:51:40.192413 1819972 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0407 13:51:40.216209 1819972 logs.go:282] 2 containers: [b45737d73f96 f4fcf1ba0dce]
	I0407 13:51:40.216295 1819972 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0407 13:51:40.239039 1819972 logs.go:282] 2 containers: [a2086baae207 d92117844997]
	I0407 13:51:40.239125 1819972 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0407 13:51:40.262354 1819972 logs.go:282] 2 containers: [fce53c7f2eb0 3a9781764312]
	I0407 13:51:40.262433 1819972 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0407 13:51:40.283816 1819972 logs.go:282] 2 containers: [062895b6a45a 7cb4581969c6]
	I0407 13:51:40.283903 1819972 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0407 13:51:40.305544 1819972 logs.go:282] 2 containers: [c2da54d5c256 3e48a853c03b]
	I0407 13:51:40.305632 1819972 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0407 13:51:40.326317 1819972 logs.go:282] 0 containers: []
	W0407 13:51:40.326339 1819972 logs.go:284] No container was found matching "kindnet"
	I0407 13:51:40.326394 1819972 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0407 13:51:40.346269 1819972 logs.go:282] 2 containers: [55bf8eb1ab94 fcbefe8497a0]
	I0407 13:51:40.346404 1819972 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0407 13:51:40.368221 1819972 logs.go:282] 1 containers: [c66d59ac00e0]
	I0407 13:51:40.368255 1819972 logs.go:123] Gathering logs for etcd [b45737d73f96] ...
	I0407 13:51:40.368267 1819972 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b45737d73f96"
	I0407 13:51:40.405468 1819972 logs.go:123] Gathering logs for coredns [a2086baae207] ...
	I0407 13:51:40.405507 1819972 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2086baae207"
	I0407 13:51:40.429912 1819972 logs.go:123] Gathering logs for kube-scheduler [3a9781764312] ...
	I0407 13:51:40.429941 1819972 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a9781764312"
	I0407 13:51:40.463734 1819972 logs.go:123] Gathering logs for kube-proxy [7cb4581969c6] ...
	I0407 13:51:40.463768 1819972 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cb4581969c6"
	I0407 13:51:40.490415 1819972 logs.go:123] Gathering logs for kubernetes-dashboard [c66d59ac00e0] ...
	I0407 13:51:40.490443 1819972 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c66d59ac00e0"
	I0407 13:51:40.524475 1819972 logs.go:123] Gathering logs for kubelet ...
	I0407 13:51:40.524504 1819972 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0407 13:51:40.585432 1819972 logs.go:138] Found kubelet problem: Apr 07 13:45:59 old-k8s-version-169187 kubelet[1477]: E0407 13:45:59.905405    1477 reflector.go:138] object-"default"/"default-token-n6f2l": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-n6f2l" is forbidden: User "system:node:old-k8s-version-169187" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-169187' and this object
	W0407 13:51:40.585693 1819972 logs.go:138] Found kubelet problem: Apr 07 13:45:59 old-k8s-version-169187 kubelet[1477]: E0407 13:45:59.905477    1477 reflector.go:138] object-"kube-system"/"kube-proxy-token-lqq9d": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-lqq9d" is forbidden: User "system:node:old-k8s-version-169187" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-169187' and this object
	W0407 13:51:40.585903 1819972 logs.go:138] Found kubelet problem: Apr 07 13:45:59 old-k8s-version-169187 kubelet[1477]: E0407 13:45:59.905533    1477 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-169187" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-169187' and this object
	W0407 13:51:40.586131 1819972 logs.go:138] Found kubelet problem: Apr 07 13:45:59 old-k8s-version-169187 kubelet[1477]: E0407 13:45:59.905580    1477 reflector.go:138] object-"kube-system"/"storage-provisioner-token-cxdxc": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-cxdxc" is forbidden: User "system:node:old-k8s-version-169187" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-169187' and this object
	W0407 13:51:40.586341 1819972 logs.go:138] Found kubelet problem: Apr 07 13:45:59 old-k8s-version-169187 kubelet[1477]: E0407 13:45:59.905695    1477 reflector.go:138] object-"kube-system"/"coredns-token-tttv5": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-tttv5" is forbidden: User "system:node:old-k8s-version-169187" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-169187' and this object
	W0407 13:51:40.586541 1819972 logs.go:138] Found kubelet problem: Apr 07 13:45:59 old-k8s-version-169187 kubelet[1477]: E0407 13:45:59.906374    1477 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-169187" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-169187' and this object
	W0407 13:51:40.593243 1819972 logs.go:138] Found kubelet problem: Apr 07 13:46:01 old-k8s-version-169187 kubelet[1477]: E0407 13:46:01.360618    1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0407 13:51:40.593907 1819972 logs.go:138] Found kubelet problem: Apr 07 13:46:02 old-k8s-version-169187 kubelet[1477]: E0407 13:46:02.348740    1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:51:40.594420 1819972 logs.go:138] Found kubelet problem: Apr 07 13:46:03 old-k8s-version-169187 kubelet[1477]: E0407 13:46:03.359622    1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:51:40.596854 1819972 logs.go:138] Found kubelet problem: Apr 07 13:46:15 old-k8s-version-169187 kubelet[1477]: E0407 13:46:15.664641    1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0407 13:51:40.601421 1819972 logs.go:138] Found kubelet problem: Apr 07 13:46:20 old-k8s-version-169187 kubelet[1477]: E0407 13:46:20.047413    1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0407 13:51:40.601983 1819972 logs.go:138] Found kubelet problem: Apr 07 13:46:20 old-k8s-version-169187 kubelet[1477]: E0407 13:46:20.686892    1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:51:40.602183 1819972 logs.go:138] Found kubelet problem: Apr 07 13:46:21 old-k8s-version-169187 kubelet[1477]: E0407 13:46:21.704280    1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:51:40.602545 1819972 logs.go:138] Found kubelet problem: Apr 07 13:46:26 old-k8s-version-169187 kubelet[1477]: E0407 13:46:26.633503    1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:51:40.603191 1819972 logs.go:138] Found kubelet problem: Apr 07 13:46:32 old-k8s-version-169187 kubelet[1477]: E0407 13:46:32.852394    1477 pod_workers.go:191] Error syncing pod 799a1ac5-a9e9-4fd4-b152-afc0c2012231 ("storage-provisioner_kube-system(799a1ac5-a9e9-4fd4-b152-afc0c2012231)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(799a1ac5-a9e9-4fd4-b152-afc0c2012231)"
	W0407 13:51:40.605554 1819972 logs.go:138] Found kubelet problem: Apr 07 13:46:34 old-k8s-version-169187 kubelet[1477]: E0407 13:46:34.079950    1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0407 13:51:40.607980 1819972 logs.go:138] Found kubelet problem: Apr 07 13:46:40 old-k8s-version-169187 kubelet[1477]: E0407 13:46:40.651036    1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0407 13:51:40.608182 1819972 logs.go:138] Found kubelet problem: Apr 07 13:46:45 old-k8s-version-169187 kubelet[1477]: E0407 13:46:45.642017    1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:51:40.608499 1819972 logs.go:138] Found kubelet problem: Apr 07 13:46:52 old-k8s-version-169187 kubelet[1477]: E0407 13:46:52.633525    1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:51:40.610729 1819972 logs.go:138] Found kubelet problem: Apr 07 13:47:00 old-k8s-version-169187 kubelet[1477]: E0407 13:47:00.133509    1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0407 13:51:40.610915 1819972 logs.go:138] Found kubelet problem: Apr 07 13:47:05 old-k8s-version-169187 kubelet[1477]: E0407 13:47:05.658773    1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:51:40.611111 1819972 logs.go:138] Found kubelet problem: Apr 07 13:47:14 old-k8s-version-169187 kubelet[1477]: E0407 13:47:14.633956    1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:51:40.611295 1819972 logs.go:138] Found kubelet problem: Apr 07 13:47:19 old-k8s-version-169187 kubelet[1477]: E0407 13:47:19.638120    1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:51:40.611490 1819972 logs.go:138] Found kubelet problem: Apr 07 13:47:28 old-k8s-version-169187 kubelet[1477]: E0407 13:47:28.644719    1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:51:40.613571 1819972 logs.go:138] Found kubelet problem: Apr 07 13:47:34 old-k8s-version-169187 kubelet[1477]: E0407 13:47:34.650220    1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0407 13:51:40.615805 1819972 logs.go:138] Found kubelet problem: Apr 07 13:47:43 old-k8s-version-169187 kubelet[1477]: E0407 13:47:43.179657    1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0407 13:51:40.615992 1819972 logs.go:138] Found kubelet problem: Apr 07 13:47:47 old-k8s-version-169187 kubelet[1477]: E0407 13:47:47.633825    1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:51:40.616188 1819972 logs.go:138] Found kubelet problem: Apr 07 13:47:54 old-k8s-version-169187 kubelet[1477]: E0407 13:47:54.633728    1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:51:40.616374 1819972 logs.go:138] Found kubelet problem: Apr 07 13:47:58 old-k8s-version-169187 kubelet[1477]: E0407 13:47:58.633722    1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:51:40.616569 1819972 logs.go:138] Found kubelet problem: Apr 07 13:48:07 old-k8s-version-169187 kubelet[1477]: E0407 13:48:07.636662    1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:51:40.616749 1819972 logs.go:138] Found kubelet problem: Apr 07 13:48:09 old-k8s-version-169187 kubelet[1477]: E0407 13:48:09.636763    1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:51:40.616941 1819972 logs.go:138] Found kubelet problem: Apr 07 13:48:22 old-k8s-version-169187 kubelet[1477]: E0407 13:48:22.633343    1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:51:40.617122 1819972 logs.go:138] Found kubelet problem: Apr 07 13:48:24 old-k8s-version-169187 kubelet[1477]: E0407 13:48:24.633503    1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:51:40.617319 1819972 logs.go:138] Found kubelet problem: Apr 07 13:48:35 old-k8s-version-169187 kubelet[1477]: E0407 13:48:35.643057    1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:51:40.617507 1819972 logs.go:138] Found kubelet problem: Apr 07 13:48:38 old-k8s-version-169187 kubelet[1477]: E0407 13:48:38.633424    1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:51:40.617702 1819972 logs.go:138] Found kubelet problem: Apr 07 13:48:47 old-k8s-version-169187 kubelet[1477]: E0407 13:48:47.633680    1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:51:40.617885 1819972 logs.go:138] Found kubelet problem: Apr 07 13:48:53 old-k8s-version-169187 kubelet[1477]: E0407 13:48:53.633745    1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:51:40.618081 1819972 logs.go:138] Found kubelet problem: Apr 07 13:49:02 old-k8s-version-169187 kubelet[1477]: E0407 13:49:02.633346    1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:51:40.620175 1819972 logs.go:138] Found kubelet problem: Apr 07 13:49:05 old-k8s-version-169187 kubelet[1477]: E0407 13:49:05.657505    1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0407 13:51:40.620361 1819972 logs.go:138] Found kubelet problem: Apr 07 13:49:17 old-k8s-version-169187 kubelet[1477]: E0407 13:49:17.634169    1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:51:40.622592 1819972 logs.go:138] Found kubelet problem: Apr 07 13:49:18 old-k8s-version-169187 kubelet[1477]: E0407 13:49:18.080057    1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0407 13:51:40.622790 1819972 logs.go:138] Found kubelet problem: Apr 07 13:49:29 old-k8s-version-169187 kubelet[1477]: E0407 13:49:29.638278    1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:51:40.622975 1819972 logs.go:138] Found kubelet problem: Apr 07 13:49:30 old-k8s-version-169187 kubelet[1477]: E0407 13:49:30.638576    1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:51:40.623170 1819972 logs.go:138] Found kubelet problem: Apr 07 13:49:41 old-k8s-version-169187 kubelet[1477]: E0407 13:49:41.654838    1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:51:40.623354 1819972 logs.go:138] Found kubelet problem: Apr 07 13:49:43 old-k8s-version-169187 kubelet[1477]: E0407 13:49:43.636263    1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:51:40.623557 1819972 logs.go:138] Found kubelet problem: Apr 07 13:49:56 old-k8s-version-169187 kubelet[1477]: E0407 13:49:56.633358    1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:51:40.623742 1819972 logs.go:138] Found kubelet problem: Apr 07 13:49:57 old-k8s-version-169187 kubelet[1477]: E0407 13:49:57.633386    1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:51:40.623944 1819972 logs.go:138] Found kubelet problem: Apr 07 13:50:07 old-k8s-version-169187 kubelet[1477]: E0407 13:50:07.638473    1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:51:40.624128 1819972 logs.go:138] Found kubelet problem: Apr 07 13:50:12 old-k8s-version-169187 kubelet[1477]: E0407 13:50:12.633385    1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:51:40.624326 1819972 logs.go:138] Found kubelet problem: Apr 07 13:50:22 old-k8s-version-169187 kubelet[1477]: E0407 13:50:22.633440    1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:51:40.624510 1819972 logs.go:138] Found kubelet problem: Apr 07 13:50:25 old-k8s-version-169187 kubelet[1477]: E0407 13:50:25.633558    1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:51:40.624729 1819972 logs.go:138] Found kubelet problem: Apr 07 13:50:37 old-k8s-version-169187 kubelet[1477]: E0407 13:50:37.636391    1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:51:40.624919 1819972 logs.go:138] Found kubelet problem: Apr 07 13:50:39 old-k8s-version-169187 kubelet[1477]: E0407 13:50:39.634229    1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:51:40.625102 1819972 logs.go:138] Found kubelet problem: Apr 07 13:50:51 old-k8s-version-169187 kubelet[1477]: E0407 13:50:51.633542    1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:51:40.625298 1819972 logs.go:138] Found kubelet problem: Apr 07 13:50:51 old-k8s-version-169187 kubelet[1477]: E0407 13:50:51.643349    1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:51:40.625497 1819972 logs.go:138] Found kubelet problem: Apr 07 13:51:03 old-k8s-version-169187 kubelet[1477]: E0407 13:51:03.636916    1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:51:40.625681 1819972 logs.go:138] Found kubelet problem: Apr 07 13:51:06 old-k8s-version-169187 kubelet[1477]: E0407 13:51:06.633346    1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:51:40.625877 1819972 logs.go:138] Found kubelet problem: Apr 07 13:51:17 old-k8s-version-169187 kubelet[1477]: E0407 13:51:17.638234    1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:51:40.626063 1819972 logs.go:138] Found kubelet problem: Apr 07 13:51:19 old-k8s-version-169187 kubelet[1477]: E0407 13:51:19.643952    1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:51:40.626260 1819972 logs.go:138] Found kubelet problem: Apr 07 13:51:32 old-k8s-version-169187 kubelet[1477]: E0407 13:51:32.635754    1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:51:40.626442 1819972 logs.go:138] Found kubelet problem: Apr 07 13:51:33 old-k8s-version-169187 kubelet[1477]: E0407 13:51:33.635745    1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
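The run of warnings above is kubelet's standard image-pull back-off loop: the registry host fake.domain does not resolve on the node's DNS (192.168.76.1:53, per the ErrImagePull entries), so every pull attempt fails and kubelet alternates between ErrImagePull (an attempt just failed) and ImagePullBackOff (waiting out the back-off before retrying). The same daemon error can be reproduced by hand; a minimal check, assuming the old-k8s-version-169187 profile is still running (the image reference is copied from the log, everything else is illustrative):

	minikube -p old-k8s-version-169187 ssh -- docker pull fake.domain/registry.k8s.io/echoserver:1.4
	# Expected to fail with the same daemon error the kubelet events show:
	#   Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain ... no such host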
	I0407 13:51:40.626453 1819972 logs.go:123] Gathering logs for kube-apiserver [82525be035b3] ...
	I0407 13:51:40.626467 1819972 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82525be035b3"
	I0407 13:51:40.692053 1819972 logs.go:123] Gathering logs for etcd [f4fcf1ba0dce] ...
	I0407 13:51:40.692095 1819972 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4fcf1ba0dce"
	I0407 13:51:40.730914 1819972 logs.go:123] Gathering logs for kube-scheduler [fce53c7f2eb0] ...
	I0407 13:51:40.731004 1819972 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fce53c7f2eb0"
	I0407 13:51:40.756163 1819972 logs.go:123] Gathering logs for kube-proxy [062895b6a45a] ...
	I0407 13:51:40.756192 1819972 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 062895b6a45a"
	I0407 13:51:40.779888 1819972 logs.go:123] Gathering logs for container status ...
	I0407 13:51:40.779915 1819972 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 13:51:40.830573 1819972 logs.go:123] Gathering logs for kube-controller-manager [c2da54d5c256] ...
	I0407 13:51:40.830603 1819972 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2da54d5c256"
	I0407 13:51:40.883151 1819972 logs.go:123] Gathering logs for dmesg ...
	I0407 13:51:40.883193 1819972 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 13:51:40.899720 1819972 logs.go:123] Gathering logs for describe nodes ...
	I0407 13:51:40.899749 1819972 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0407 13:51:41.056544 1819972 logs.go:123] Gathering logs for coredns [d92117844997] ...
	I0407 13:51:41.056575 1819972 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d92117844997"
	I0407 13:51:41.081183 1819972 logs.go:123] Gathering logs for storage-provisioner [55bf8eb1ab94] ...
	I0407 13:51:41.081212 1819972 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55bf8eb1ab94"
	I0407 13:51:41.104294 1819972 logs.go:123] Gathering logs for storage-provisioner [fcbefe8497a0] ...
	I0407 13:51:41.104324 1819972 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcbefe8497a0"
	I0407 13:51:41.126590 1819972 logs.go:123] Gathering logs for Docker ...
	I0407 13:51:41.126619 1819972 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0407 13:51:41.152068 1819972 logs.go:123] Gathering logs for kube-controller-manager [3e48a853c03b] ...
	I0407 13:51:41.152100 1819972 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e48a853c03b"
	I0407 13:51:41.190557 1819972 logs.go:123] Gathering logs for kube-apiserver [78f8992ce8b4] ...
	I0407 13:51:41.190633 1819972 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78f8992ce8b4"
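Each "Gathering logs for ..." step above shells into the node and tails the matching container with docker logs; when triaging a run like this by hand, the same data can be collected directly. A sketch, assuming the profile still exists (the container ID is the kube-apiserver one from the log):

	minikube -p old-k8s-version-169187 ssh -- "docker ps -a"                          # map components to container IDs
	minikube -p old-k8s-version-169187 ssh -- "docker logs --tail 400 82525be035b3"   # kube-apiserver, per the log
	minikube -p old-k8s-version-169187 ssh -- "sudo journalctl -u docker -u cri-docker -n 400"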
	I0407 13:51:41.268535 1819972 out.go:358] Setting ErrFile to fd 2...
	I0407 13:51:41.268567 1819972 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0407 13:51:41.268629 1819972 out.go:270] X Problems detected in kubelet:
	W0407 13:51:41.268642 1819972 out.go:270]   Apr 07 13:51:06 old-k8s-version-169187 kubelet[1477]: E0407 13:51:06.633346    1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:51:41.268652 1819972 out.go:270]   Apr 07 13:51:17 old-k8s-version-169187 kubelet[1477]: E0407 13:51:17.638234    1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:51:41.268659 1819972 out.go:270]   Apr 07 13:51:19 old-k8s-version-169187 kubelet[1477]: E0407 13:51:19.643952    1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0407 13:51:41.268664 1819972 out.go:270]   Apr 07 13:51:32 old-k8s-version-169187 kubelet[1477]: E0407 13:51:32.635754    1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0407 13:51:41.268672 1819972 out.go:270]   Apr 07 13:51:33 old-k8s-version-169187 kubelet[1477]: E0407 13:51:33.635745    1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0407 13:51:41.268683 1819972 out.go:358] Setting ErrFile to fd 2...
	I0407 13:51:41.268688 1819972 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 13:51:41.988687 1834508 cli_runner.go:164] Run: docker container inspect embed-certs-690840 --format={{.State.Running}}
	I0407 13:51:42.015807 1834508 cli_runner.go:164] Run: docker container inspect embed-certs-690840 --format={{.State.Status}}
	I0407 13:51:42.044166 1834508 cli_runner.go:164] Run: docker exec embed-certs-690840 stat /var/lib/dpkg/alternatives/iptables
	I0407 13:51:42.113468 1834508 oci.go:144] the created container "embed-certs-690840" has a running status.
	I0407 13:51:42.113524 1834508 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/20598-1489638/.minikube/machines/embed-certs-690840/id_rsa...
	I0407 13:51:42.689651 1834508 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/20598-1489638/.minikube/machines/embed-certs-690840/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0407 13:51:42.726679 1834508 cli_runner.go:164] Run: docker container inspect embed-certs-690840 --format={{.State.Status}}
	I0407 13:51:42.753377 1834508 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0407 13:51:42.753400 1834508 kic_runner.go:114] Args: [docker exec --privileged embed-certs-690840 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0407 13:51:42.836637 1834508 cli_runner.go:164] Run: docker container inspect embed-certs-690840 --format={{.State.Status}}
	I0407 13:51:42.860749 1834508 machine.go:93] provisionDockerMachine start ...
	I0407 13:51:42.860864 1834508 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-690840
	I0407 13:51:42.893205 1834508 main.go:141] libmachine: Using SSH client type: native
	I0407 13:51:42.893552 1834508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e66c0] 0x3e8e80 <nil>  [] 0s} 127.0.0.1 34621 <nil> <nil>}
	I0407 13:51:42.893578 1834508 main.go:141] libmachine: About to run SSH command:
	hostname
	I0407 13:51:42.894239 1834508 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:51458->127.0.0.1:34621: read: connection reset by peer
	I0407 13:51:46.023245 1834508 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-690840
	
	I0407 13:51:46.023277 1834508 ubuntu.go:169] provisioning hostname "embed-certs-690840"
	I0407 13:51:46.023362 1834508 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-690840
	I0407 13:51:46.041782 1834508 main.go:141] libmachine: Using SSH client type: native
	I0407 13:51:46.042108 1834508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e66c0] 0x3e8e80 <nil>  [] 0s} 127.0.0.1 34621 <nil> <nil>}
	I0407 13:51:46.042125 1834508 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-690840 && echo "embed-certs-690840" | sudo tee /etc/hostname
	I0407 13:51:46.177786 1834508 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-690840
	
	I0407 13:51:46.177901 1834508 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-690840
	I0407 13:51:46.196245 1834508 main.go:141] libmachine: Using SSH client type: native
	I0407 13:51:46.196563 1834508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e66c0] 0x3e8e80 <nil>  [] 0s} 127.0.0.1 34621 <nil> <nil>}
	I0407 13:51:46.196585 1834508 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-690840' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-690840/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-690840' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0407 13:51:46.319428 1834508 main.go:141] libmachine: SSH cmd err, output: <nil>: 
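The multi-line script above is idempotent: it leaves /etc/hosts alone when a line already ends with the hostname, rewrites an existing 127.0.1.1 entry in place, and appends one otherwise, so repeated provisioning never stacks duplicate entries. The net effect can be spot-checked with (illustrative, assuming the container is still up):

	minikube -p embed-certs-690840 ssh -- grep '^127.0.1.1' /etc/hosts
	# 127.0.1.1 embed-certs-690840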
	I0407 13:51:46.319527 1834508 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20598-1489638/.minikube CaCertPath:/home/jenkins/minikube-integration/20598-1489638/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20598-1489638/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20598-1489638/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20598-1489638/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20598-1489638/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20598-1489638/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20598-1489638/.minikube}
	I0407 13:51:46.319558 1834508 ubuntu.go:177] setting up certificates
	I0407 13:51:46.319567 1834508 provision.go:84] configureAuth start
	I0407 13:51:46.319634 1834508 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-690840
	I0407 13:51:46.336598 1834508 provision.go:143] copyHostCerts
	I0407 13:51:46.336668 1834508 exec_runner.go:144] found /home/jenkins/minikube-integration/20598-1489638/.minikube/ca.pem, removing ...
	I0407 13:51:46.336680 1834508 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20598-1489638/.minikube/ca.pem
	I0407 13:51:46.336757 1834508 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20598-1489638/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20598-1489638/.minikube/ca.pem (1082 bytes)
	I0407 13:51:46.336852 1834508 exec_runner.go:144] found /home/jenkins/minikube-integration/20598-1489638/.minikube/cert.pem, removing ...
	I0407 13:51:46.336862 1834508 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20598-1489638/.minikube/cert.pem
	I0407 13:51:46.336888 1834508 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20598-1489638/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20598-1489638/.minikube/cert.pem (1123 bytes)
	I0407 13:51:46.336956 1834508 exec_runner.go:144] found /home/jenkins/minikube-integration/20598-1489638/.minikube/key.pem, removing ...
	I0407 13:51:46.336966 1834508 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20598-1489638/.minikube/key.pem
	I0407 13:51:46.336990 1834508 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20598-1489638/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20598-1489638/.minikube/key.pem (1675 bytes)
	I0407 13:51:46.337044 1834508 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20598-1489638/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20598-1489638/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20598-1489638/.minikube/certs/ca-key.pem org=jenkins.embed-certs-690840 san=[127.0.0.1 192.168.85.2 embed-certs-690840 localhost minikube]
	I0407 13:51:46.744624 1834508 provision.go:177] copyRemoteCerts
	I0407 13:51:46.744705 1834508 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0407 13:51:46.744749 1834508 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-690840
	I0407 13:51:46.773496 1834508 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34621 SSHKeyPath:/home/jenkins/minikube-integration/20598-1489638/.minikube/machines/embed-certs-690840/id_rsa Username:docker}
	I0407 13:51:46.865119 1834508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-1489638/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0407 13:51:46.889757 1834508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-1489638/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0407 13:51:46.914203 1834508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-1489638/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0407 13:51:46.938174 1834508 provision.go:87] duration metric: took 618.592761ms to configureAuth
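configureAuth above generated a fresh server certificate (SANs 127.0.0.1, 192.168.85.2, embed-certs-690840, localhost, minikube, per the "generating server cert" line) and staged the host-side copies. If a TLS handshake to the Docker daemon ever fails later, the chain can be sanity-checked offline; a sketch using the paths from the log (the openssl -ext flag assumes OpenSSL 1.1.1 or newer):

	MK=/home/jenkins/minikube-integration/20598-1489638/.minikube
	openssl verify -CAfile "$MK/certs/ca.pem" "$MK/machines/server.pem"   # should print: OK
	openssl x509 -in "$MK/machines/server.pem" -noout -ext subjectAltName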
	I0407 13:51:46.938199 1834508 ubuntu.go:193] setting minikube options for container-runtime
	I0407 13:51:46.938379 1834508 config.go:182] Loaded profile config "embed-certs-690840": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0407 13:51:46.938438 1834508 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-690840
	I0407 13:51:46.955841 1834508 main.go:141] libmachine: Using SSH client type: native
	I0407 13:51:46.956220 1834508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e66c0] 0x3e8e80 <nil>  [] 0s} 127.0.0.1 34621 <nil> <nil>}
	I0407 13:51:46.956237 1834508 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0407 13:51:47.080060 1834508 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0407 13:51:47.080083 1834508 ubuntu.go:71] root file system type: overlay
	I0407 13:51:47.080187 1834508 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0407 13:51:47.080255 1834508 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-690840
	I0407 13:51:47.097482 1834508 main.go:141] libmachine: Using SSH client type: native
	I0407 13:51:47.097789 1834508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e66c0] 0x3e8e80 <nil>  [] 0s} 127.0.0.1 34621 <nil> <nil>}
	I0407 13:51:47.097879 1834508 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0407 13:51:47.237844 1834508 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0407 13:51:47.237935 1834508 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-690840
	I0407 13:51:47.256513 1834508 main.go:141] libmachine: Using SSH client type: native
	I0407 13:51:47.256826 1834508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e66c0] 0x3e8e80 <nil>  [] 0s} 127.0.0.1 34621 <nil> <nil>}
	I0407 13:51:47.256849 1834508 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0407 13:51:48.123126 1834508 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-03-25 15:05:41.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-04-07 13:51:47.233409961 +0000
	@@ -1,46 +1,49 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0407 13:51:48.123158 1834508 machine.go:96] duration metric: took 5.262387408s to provisionDockerMachine
	I0407 13:51:48.123171 1834508 client.go:171] duration metric: took 11.03528956s to LocalClient.Create
	I0407 13:51:48.123185 1834508 start.go:167] duration metric: took 11.035352576s to libmachine.API.Create "embed-certs-690840"
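The diff above is the heart of minikube's runtime provisioning: the bare ExecStart= line clears any previously defined start command so that the following ExecStart= carrying the TLS and registry flags is the only one left (systemd rejects units with multiple ExecStart= settings unless Type=oneshot). Note that the command actually moves the new file over /lib/systemd/system/docker.service rather than installing a drop-in, so the blank ExecStart= is belt-and-braces here; the unit's own comments describe the drop-in case. Whether the override took effect can be confirmed with (illustrative):

	minikube -p embed-certs-690840 ssh -- "systemctl show docker -p ExecStart --no-pager"
	# A single ExecStart entry carrying the --tlsverify/--insecure-registry flags from the unit above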
	I0407 13:51:48.123192 1834508 start.go:293] postStartSetup for "embed-certs-690840" (driver="docker")
	I0407 13:51:48.123203 1834508 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0407 13:51:48.123268 1834508 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0407 13:51:48.123313 1834508 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-690840
	I0407 13:51:48.140892 1834508 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34621 SSHKeyPath:/home/jenkins/minikube-integration/20598-1489638/.minikube/machines/embed-certs-690840/id_rsa Username:docker}
	I0407 13:51:48.232828 1834508 ssh_runner.go:195] Run: cat /etc/os-release
	I0407 13:51:48.236214 1834508 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0407 13:51:48.236247 1834508 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0407 13:51:48.236258 1834508 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0407 13:51:48.236266 1834508 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0407 13:51:48.236276 1834508 filesync.go:126] Scanning /home/jenkins/minikube-integration/20598-1489638/.minikube/addons for local assets ...
	I0407 13:51:48.236331 1834508 filesync.go:126] Scanning /home/jenkins/minikube-integration/20598-1489638/.minikube/files for local assets ...
	I0407 13:51:48.236431 1834508 filesync.go:149] local asset: /home/jenkins/minikube-integration/20598-1489638/.minikube/files/etc/ssl/certs/14950262.pem -> 14950262.pem in /etc/ssl/certs
	I0407 13:51:48.236533 1834508 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0407 13:51:48.245781 1834508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-1489638/.minikube/files/etc/ssl/certs/14950262.pem --> /etc/ssl/certs/14950262.pem (1708 bytes)
	I0407 13:51:48.277455 1834508 start.go:296] duration metric: took 154.248187ms for postStartSetup
	I0407 13:51:48.277832 1834508 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-690840
	I0407 13:51:48.299718 1834508 profile.go:143] Saving config to /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/embed-certs-690840/config.json ...
	I0407 13:51:48.299995 1834508 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0407 13:51:48.300052 1834508 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-690840
	I0407 13:51:48.316255 1834508 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34621 SSHKeyPath:/home/jenkins/minikube-integration/20598-1489638/.minikube/machines/embed-certs-690840/id_rsa Username:docker}
	I0407 13:51:48.400391 1834508 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0407 13:51:48.404656 1834508 start.go:128] duration metric: took 11.320403511s to createHost
	I0407 13:51:48.404676 1834508 start.go:83] releasing machines lock for "embed-certs-690840", held for 11.320533834s
	I0407 13:51:48.404752 1834508 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-690840
	I0407 13:51:48.421749 1834508 ssh_runner.go:195] Run: cat /version.json
	I0407 13:51:48.421797 1834508 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-690840
	I0407 13:51:48.422088 1834508 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0407 13:51:48.422135 1834508 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-690840
	I0407 13:51:48.441200 1834508 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34621 SSHKeyPath:/home/jenkins/minikube-integration/20598-1489638/.minikube/machines/embed-certs-690840/id_rsa Username:docker}
	I0407 13:51:48.451604 1834508 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34621 SSHKeyPath:/home/jenkins/minikube-integration/20598-1489638/.minikube/machines/embed-certs-690840/id_rsa Username:docker}
	I0407 13:51:48.531051 1834508 ssh_runner.go:195] Run: systemctl --version
	I0407 13:51:48.672155 1834508 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0407 13:51:48.676900 1834508 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0407 13:51:48.703443 1834508 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0407 13:51:48.703597 1834508 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0407 13:51:48.735991 1834508 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0407 13:51:48.736028 1834508 start.go:495] detecting cgroup driver to use...
	I0407 13:51:48.736077 1834508 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0407 13:51:48.736192 1834508 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0407 13:51:48.752678 1834508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0407 13:51:48.762269 1834508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0407 13:51:48.772369 1834508 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0407 13:51:48.772439 1834508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0407 13:51:48.788169 1834508 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0407 13:51:48.800333 1834508 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0407 13:51:48.810648 1834508 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0407 13:51:48.821935 1834508 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0407 13:51:48.831226 1834508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0407 13:51:48.841722 1834508 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0407 13:51:48.852827 1834508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0407 13:51:48.862735 1834508 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0407 13:51:48.872235 1834508 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0407 13:51:48.881115 1834508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 13:51:48.963257 1834508 ssh_runner.go:195] Run: sudo systemctl restart containerd
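The sed chain above aligns containerd with the "cgroupfs" driver detected on the host: SystemdCgroup = false, the io.containerd.runc.v2 runtime, registry.k8s.io/pause:3.10 as the sandbox image, and unprivileged ports enabled, followed by a daemon-reload and restart. A quick spot check of the rewritten config (illustrative):

	minikube -p embed-certs-690840 ssh -- "grep -nE 'SystemdCgroup|sandbox_image|enable_unprivileged_ports' /etc/containerd/config.toml"
	# SystemdCgroup = false
	# sandbox_image = "registry.k8s.io/pause:3.10"
	# enable_unprivileged_ports = true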
	I0407 13:51:49.059790 1834508 start.go:495] detecting cgroup driver to use...
	I0407 13:51:49.059890 1834508 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0407 13:51:49.059974 1834508 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0407 13:51:49.075333 1834508 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0407 13:51:49.075454 1834508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0407 13:51:49.090407 1834508 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0407 13:51:49.109626 1834508 ssh_runner.go:195] Run: which cri-dockerd
	I0407 13:51:49.114302 1834508 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0407 13:51:49.124543 1834508 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0407 13:51:49.152866 1834508 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0407 13:51:49.271842 1834508 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0407 13:51:49.380445 1834508 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0407 13:51:49.380625 1834508 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0407 13:51:49.405213 1834508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 13:51:49.528544 1834508 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0407 13:51:49.904057 1834508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0407 13:51:49.915785 1834508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0407 13:51:49.928116 1834508 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0407 13:51:50.018287 1834508 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0407 13:51:50.107109 1834508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 13:51:50.202681 1834508 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0407 13:51:50.222968 1834508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0407 13:51:50.236078 1834508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 13:51:50.330088 1834508 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0407 13:51:50.408194 1834508 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0407 13:51:50.408281 1834508 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0407 13:51:50.413339 1834508 start.go:563] Will wait 60s for crictl version
	I0407 13:51:50.413400 1834508 ssh_runner.go:195] Run: which crictl
	I0407 13:51:50.417311 1834508 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0407 13:51:50.461468 1834508 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.0.4
	RuntimeApiVersion:  v1
	I0407 13:51:50.461554 1834508 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0407 13:51:50.490147 1834508 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0407 13:51:51.270356 1819972 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0407 13:51:51.280041 1819972 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0407 13:51:51.283661 1819972 out.go:201] 
	W0407 13:51:51.286521 1819972 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W0407 13:51:51.286680 1819972 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	W0407 13:51:51.286738 1819972 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	W0407 13:51:51.286781 1819972 out.go:270] * 
	W0407 13:51:51.287792 1819972 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0407 13:51:51.291196 1819972 out.go:201] 
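The recovery the tool itself suggests amounts to capturing diagnostics and then purging the profile state, i.e. (commands as named in the output above):

    minikube logs --file=logs.txt     # collect logs for the issue report first
    minikube delete --all --purge     # then remove all profiles and cached state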
	I0407 13:51:50.521765 1834508 out.go:235] * Preparing Kubernetes v1.32.2 on Docker 28.0.4 ...
	I0407 13:51:50.521892 1834508 cli_runner.go:164] Run: docker network inspect embed-certs-690840 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0407 13:51:50.538490 1834508 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0407 13:51:50.542589 1834508 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
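The bash one-liner above is an idempotent /etc/hosts update: filter out any stale host.minikube.internal entry, append the current mapping, and copy the temp file back into place. The line it leaves behind:

    192.168.85.1	host.minikube.internal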
	I0407 13:51:50.554334 1834508 kubeadm.go:883] updating cluster {Name:embed-certs-690840 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:embed-certs-690840 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0407 13:51:50.554453 1834508 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime docker
	I0407 13:51:50.554512 1834508 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0407 13:51:50.575863 1834508 docker.go:689] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.32.2
	registry.k8s.io/kube-controller-manager:v1.32.2
	registry.k8s.io/kube-scheduler:v1.32.2
	registry.k8s.io/kube-proxy:v1.32.2
	registry.k8s.io/etcd:3.5.16-0
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0407 13:51:50.575887 1834508 docker.go:619] Images already preloaded, skipping extraction
	I0407 13:51:50.575950 1834508 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0407 13:51:50.602011 1834508 docker.go:689] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.32.2
	registry.k8s.io/kube-controller-manager:v1.32.2
	registry.k8s.io/kube-scheduler:v1.32.2
	registry.k8s.io/kube-proxy:v1.32.2
	registry.k8s.io/etcd:3.5.16-0
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0407 13:51:50.602038 1834508 cache_images.go:84] Images are preloaded, skipping loading
	I0407 13:51:50.602047 1834508 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.32.2 docker true true} ...
	I0407 13:51:50.602174 1834508 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-690840 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:embed-certs-690840 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
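In the drop-in above, the empty ExecStart= line is the standard systemd idiom for overriding a command: it clears the ExecStart inherited from the base kubelet unit before the replacement is declared. The merged unit can be inspected on the node the same way the log inspects docker.service:

    sudo systemctl cat kubelet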
	I0407 13:51:50.602250 1834508 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0407 13:51:50.675069 1834508 cni.go:84] Creating CNI manager for ""
	I0407 13:51:50.675097 1834508 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0407 13:51:50.675110 1834508 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0407 13:51:50.675130 1834508 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-690840 NodeName:embed-certs-690840 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0407 13:51:50.675272 1834508 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "embed-certs-690840"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
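The generated manifest bundles four kubeadm API documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by ---. On kubeadm v1.32 it could be sanity-checked before use with something like (a sketch; the path matches the scp destination a few lines below):

    sudo /var/lib/minikube/binaries/v1.32.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new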
	
	I0407 13:51:50.675344 1834508 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0407 13:51:50.685885 1834508 binaries.go:44] Found k8s binaries, skipping transfer
	I0407 13:51:50.685991 1834508 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0407 13:51:50.694683 1834508 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0407 13:51:50.718310 1834508 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0407 13:51:50.742122 1834508 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2296 bytes)
	I0407 13:51:50.761102 1834508 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0407 13:51:50.764634 1834508 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0407 13:51:50.775383 1834508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 13:51:50.878393 1834508 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0407 13:51:50.894081 1834508 certs.go:68] Setting up /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/embed-certs-690840 for IP: 192.168.85.2
	I0407 13:51:50.894192 1834508 certs.go:194] generating shared ca certs ...
	I0407 13:51:50.894225 1834508 certs.go:226] acquiring lock for ca certs: {Name:mk03ca927c02de3344f72431a7d9f1cc9d84ee54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:51:50.894467 1834508 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20598-1489638/.minikube/ca.key
	I0407 13:51:50.894540 1834508 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20598-1489638/.minikube/proxy-client-ca.key
	I0407 13:51:50.894563 1834508 certs.go:256] generating profile certs ...
	I0407 13:51:50.894641 1834508 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/embed-certs-690840/client.key
	I0407 13:51:50.894684 1834508 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/embed-certs-690840/client.crt with IP's: []
	
	
	==> Docker <==
	Apr 07 13:46:34 old-k8s-version-169187 dockerd[1139]: time="2025-04-07T13:46:34.076242216Z" level=info msg="Attempting next endpoint for pull after error: [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	Apr 07 13:46:40 old-k8s-version-169187 dockerd[1139]: time="2025-04-07T13:46:40.646328871Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Apr 07 13:46:40 old-k8s-version-169187 dockerd[1139]: time="2025-04-07T13:46:40.646379193Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Apr 07 13:46:40 old-k8s-version-169187 dockerd[1139]: time="2025-04-07T13:46:40.649982240Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Apr 07 13:46:59 old-k8s-version-169187 dockerd[1139]: time="2025-04-07T13:46:59.910801577Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
	Apr 07 13:47:00 old-k8s-version-169187 dockerd[1139]: time="2025-04-07T13:47:00.123348215Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4"
	Apr 07 13:47:00 old-k8s-version-169187 dockerd[1139]: time="2025-04-07T13:47:00.123812512Z" level=warning msg="[DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
	Apr 07 13:47:00 old-k8s-version-169187 dockerd[1139]: time="2025-04-07T13:47:00.123979807Z" level=info msg="Attempting next endpoint for pull after error: [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	Apr 07 13:47:34 old-k8s-version-169187 dockerd[1139]: time="2025-04-07T13:47:34.645147638Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Apr 07 13:47:34 old-k8s-version-169187 dockerd[1139]: time="2025-04-07T13:47:34.645212795Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Apr 07 13:47:34 old-k8s-version-169187 dockerd[1139]: time="2025-04-07T13:47:34.648327922Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Apr 07 13:47:42 old-k8s-version-169187 dockerd[1139]: time="2025-04-07T13:47:42.870300316Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
	Apr 07 13:47:43 old-k8s-version-169187 dockerd[1139]: time="2025-04-07T13:47:43.175025072Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4"
	Apr 07 13:47:43 old-k8s-version-169187 dockerd[1139]: time="2025-04-07T13:47:43.175257704Z" level=warning msg="[DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
	Apr 07 13:47:43 old-k8s-version-169187 dockerd[1139]: time="2025-04-07T13:47:43.175570157Z" level=info msg="Attempting next endpoint for pull after error: [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	Apr 07 13:49:05 old-k8s-version-169187 dockerd[1139]: time="2025-04-07T13:49:05.653125401Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Apr 07 13:49:05 old-k8s-version-169187 dockerd[1139]: time="2025-04-07T13:49:05.653568340Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Apr 07 13:49:05 old-k8s-version-169187 dockerd[1139]: time="2025-04-07T13:49:05.656297317Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Apr 07 13:49:17 old-k8s-version-169187 dockerd[1139]: time="2025-04-07T13:49:17.878074972Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
	Apr 07 13:49:18 old-k8s-version-169187 dockerd[1139]: time="2025-04-07T13:49:18.076831734Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4"
	Apr 07 13:49:18 old-k8s-version-169187 dockerd[1139]: time="2025-04-07T13:49:18.076967382Z" level=warning msg="[DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
	Apr 07 13:49:18 old-k8s-version-169187 dockerd[1139]: time="2025-04-07T13:49:18.076995525Z" level=info msg="Attempting next endpoint for pull after error: [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	Apr 07 13:51:48 old-k8s-version-169187 dockerd[1139]: time="2025-04-07T13:51:48.655679304Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Apr 07 13:51:48 old-k8s-version-169187 dockerd[1139]: time="2025-04-07T13:51:48.655723932Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Apr 07 13:51:48 old-k8s-version-169187 dockerd[1139]: time="2025-04-07T13:51:48.658294715Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
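The recurring fake.domain failures above are kubelet image-pull retries for the metrics-server pod: its image is pinned to the registry host fake.domain, which does not resolve from the node, so every retry logs the same warning/error pair. A single attempt can be reproduced by hand (a sketch; image reference reconstructed from the registry host and tag in the log lines above):

    docker pull fake.domain/registry.k8s.io/echoserver:1.4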
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	55bf8eb1ab94e       ba04bb24b9575                                                                                         5 minutes ago       Running             storage-provisioner       2                   14d2f02ae8bd4       storage-provisioner
	c66d59ac00e0b       kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93        5 minutes ago       Running             kubernetes-dashboard      0                   799d3dcff5b2c       kubernetes-dashboard-cd95d586-jg6t2
	062895b6a45a0       25a5233254979                                                                                         5 minutes ago       Running             kube-proxy                1                   d84281d3791ed       kube-proxy-d8l5m
	a2086baae207e       db91994f4ee8f                                                                                         5 minutes ago       Running             coredns                   1                   aa9b9941055fa       coredns-74ff55c5b-zpflr
	fcbefe8497a0e       ba04bb24b9575                                                                                         5 minutes ago       Exited              storage-provisioner       1                   14d2f02ae8bd4       storage-provisioner
	1a93436156b67       1611cd07b61d5                                                                                         5 minutes ago       Running             busybox                   1                   049954938d6d1       busybox
	b45737d73f96c       05b738aa1bc63                                                                                         6 minutes ago       Running             etcd                      1                   3ff441c73916e       etcd-old-k8s-version-169187
	82525be035b3a       2c08bbbc02d3a                                                                                         6 minutes ago       Running             kube-apiserver            1                   7a5f6d532d87d       kube-apiserver-old-k8s-version-169187
	c2da54d5c2562       1df8a2b116bd1                                                                                         6 minutes ago       Running             kube-controller-manager   1                   53776d315032b       kube-controller-manager-old-k8s-version-169187
	fce53c7f2eb00       e7605f88f17d6                                                                                         6 minutes ago       Running             kube-scheduler            1                   c864d4867092a       kube-scheduler-old-k8s-version-169187
	d784bb64a479f       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   6 minutes ago       Exited              busybox                   0                   9a5e41c91a18a       busybox
	7cb4581969c6d       25a5233254979                                                                                         7 minutes ago       Exited              kube-proxy                0                   efd72198fd173       kube-proxy-d8l5m
	d921178449970       db91994f4ee8f                                                                                         7 minutes ago       Exited              coredns                   0                   1d46ce11bd830       coredns-74ff55c5b-zpflr
	3a9781764312d       e7605f88f17d6                                                                                         8 minutes ago       Exited              kube-scheduler            0                   d67f1517233cd       kube-scheduler-old-k8s-version-169187
	3e48a853c03b2       1df8a2b116bd1                                                                                         8 minutes ago       Exited              kube-controller-manager   0                   b7c50e47771f9       kube-controller-manager-old-k8s-version-169187
	78f8992ce8b47       2c08bbbc02d3a                                                                                         8 minutes ago       Exited              kube-apiserver            0                   4d27dc9900160       kube-apiserver-old-k8s-version-169187
	f4fcf1ba0dcec       05b738aa1bc63                                                                                         8 minutes ago       Exited              etcd                      0                   f9cd1cb383006       etcd-old-k8s-version-169187
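This table is the CRI-level view of the node and, with the crictl endpoint configured earlier, can be regenerated in place:

    sudo crictl ps -a    # -a includes the Exited containers from the first start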
	
	
	==> coredns [a2086baae207] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:42712 - 40788 "HINFO IN 4832343868683583306.6690640880769357123. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012427877s
	
	
	==> coredns [d92117844997] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
	[INFO] Reloading complete
	[INFO] 127.0.0.1:37029 - 57083 "HINFO IN 3878060877044781526.7005613683257069956. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.006985932s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	I0407 13:44:29.262142       1 trace.go:116] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2025-04-07 13:43:59.261467255 +0000 UTC m=+0.077365174) (total time: 30.0005641s):
	Trace[2019727887]: [30.0005641s] [30.0005641s] END
	E0407 13:44:29.262410       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0407 13:44:29.262609       1 trace.go:116] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2025-04-07 13:43:59.262141078 +0000 UTC m=+0.078038997) (total time: 30.00044954s):
	Trace[939984059]: [30.00044954s] [30.00044954s] END
	E0407 13:44:29.262631       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0407 13:44:29.262903       1 trace.go:116] Trace[911902081]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2025-04-07 13:43:59.262377854 +0000 UTC m=+0.078275781) (total time: 30.000509265s):
	Trace[911902081]: [30.000509265s] [30.000509265s] END
	E0407 13:44:29.262918       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	E0407 13:45:26.200752       1 reflector.go:382] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=567&timeout=6m5s&timeoutSeconds=365&watch=true": dial tcp 10.96.0.1:443: connect: connection refused
	E0407 13:45:26.200799       1 reflector.go:382] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to watch *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?allowWatchBookmarks=true&resourceVersion=577&timeout=8m23s&timeoutSeconds=503&watch=true": dial tcp 10.96.0.1:443: connect: connection refused
	E0407 13:45:26.200828       1 reflector.go:382] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=200&timeout=6m27s&timeoutSeconds=387&watch=true": dial tcp 10.96.0.1:443: connect: connection refused
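The two coredns blocks are the post-restart container [a2086baae207] and the pre-restart one [d92117844997]; the latter's i/o timeouts and connection-refused errors line up with the API server shutdown at 13:45:26. The older container's logs would also be reachable with something like (a sketch; pod name taken from the container table above):

    kubectl -n kube-system logs coredns-74ff55c5b-zpflr --previous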
	
	
	==> describe nodes <==
	Name:               old-k8s-version-169187
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-169187
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5cf7512d5a64c8581140916e82b849633d870277
	                    minikube.k8s.io/name=old-k8s-version-169187
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_04_07T13_43_42_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Apr 2025 13:43:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-169187
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 07 Apr 2025 13:51:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 07 Apr 2025 13:51:52 +0000   Mon, 07 Apr 2025 13:43:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 07 Apr 2025 13:51:52 +0000   Mon, 07 Apr 2025 13:43:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 07 Apr 2025 13:51:52 +0000   Mon, 07 Apr 2025 13:43:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 07 Apr 2025 13:51:52 +0000   Mon, 07 Apr 2025 13:43:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-169187
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	System Info:
	  Machine ID:                 cad61de569f2475aba10f198e008898b
	  System UUID:                6c0f61c9-c57f-493d-acd2-69f3cc3403e1
	  Boot ID:                    234d79b0-ee5b-4f69-ac54-5d0498b7c1e5
	  Kernel Version:             5.15.0-1081-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://28.0.4
	  Kubelet Version:            v1.20.0
	  Kube-Proxy Version:         v1.20.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m40s
	  kube-system                 coredns-74ff55c5b-zpflr                           100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     7m55s
	  kube-system                 etcd-old-k8s-version-169187                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         8m7s
	  kube-system                 kube-apiserver-old-k8s-version-169187             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m7s
	  kube-system                 kube-controller-manager-old-k8s-version-169187    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m7s
	  kube-system                 kube-proxy-d8l5m                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m55s
	  kube-system                 kube-scheduler-old-k8s-version-169187             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m7s
	  kube-system                 metrics-server-9975d5f86-7rkcc                    100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         6m27s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m53s
	  kubernetes-dashboard        dashboard-metrics-scraper-8d5bb5db8-8v7k4         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m35s
	  kubernetes-dashboard        kubernetes-dashboard-cd95d586-jg6t2               0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m35s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             370Mi (4%)  170Mi (2%)
	  ephemeral-storage  100Mi (0%)  0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  8m22s (x5 over 8m22s)  kubelet     Node old-k8s-version-169187 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m22s (x5 over 8m22s)  kubelet     Node old-k8s-version-169187 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m22s (x4 over 8m22s)  kubelet     Node old-k8s-version-169187 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m7s                   kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m7s                   kubelet     Node old-k8s-version-169187 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m7s                   kubelet     Node old-k8s-version-169187 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m7s                   kubelet     Node old-k8s-version-169187 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m7s                   kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                7m57s                  kubelet     Node old-k8s-version-169187 status is now: NodeReady
	  Normal  Starting                 7m54s                  kube-proxy  Starting kube-proxy.
	  Normal  Starting                 6m4s                   kubelet     Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m4s                   kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m3s (x8 over 6m4s)    kubelet     Node old-k8s-version-169187 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m3s (x8 over 6m4s)    kubelet     Node old-k8s-version-169187 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m3s (x7 over 6m4s)    kubelet     Node old-k8s-version-169187 status is now: NodeHasSufficientPID
	  Normal  Starting                 5m51s                  kube-proxy  Starting kube-proxy.
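The event history captures both lifecycles of this node: the ~8m entries belong to the original start, the ~6m entries to the restart under test. This section is the node describe view, i.e.:

    kubectl describe node old-k8s-version-169187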
	
	
	==> dmesg <==
	[Apr 7 13:06] systemd-journald[222]: Failed to send stream file descriptor to service manager: Connection refused
	
	
	==> etcd [b45737d73f96] <==
	2025-04-07 13:47:48.819323 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-07 13:47:58.819319 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-07 13:48:08.820510 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-07 13:48:18.819229 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-07 13:48:28.819254 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-07 13:48:38.819317 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-07 13:48:48.819133 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-07 13:48:58.819349 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-07 13:49:08.819226 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-07 13:49:18.819246 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-07 13:49:28.819361 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-07 13:49:38.819190 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-07 13:49:48.819449 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-07 13:49:58.819406 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-07 13:50:08.819207 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-07 13:50:18.819665 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-07 13:50:28.819229 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-07 13:50:38.819362 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-07 13:50:48.819186 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-07 13:50:58.819392 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-07 13:51:08.819248 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-07 13:51:18.819270 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-07 13:51:28.819220 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-07 13:51:38.819322 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-07 13:51:48.819307 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> etcd [f4fcf1ba0dce] <==
	raft2025/04/07 13:43:32 INFO: ea7e25599daad906 became leader at term 2
	raft2025/04/07 13:43:32 INFO: raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2
	2025-04-07 13:43:32.402201 I | etcdserver: setting up the initial cluster version to 3.4
	2025-04-07 13:43:32.403302 N | etcdserver/membership: set the initial cluster version to 3.4
	2025-04-07 13:43:32.403465 I | etcdserver/api: enabled capabilities for version 3.4
	2025-04-07 13:43:32.403574 I | etcdserver: published {Name:old-k8s-version-169187 ClientURLs:[https://192.168.76.2:2379]} to cluster 6f20f2c4b2fb5f8a
	2025-04-07 13:43:32.403807 I | embed: ready to serve client requests
	2025-04-07 13:43:32.405178 I | embed: serving client requests on 192.168.76.2:2379
	2025-04-07 13:43:32.410946 I | embed: ready to serve client requests
	2025-04-07 13:43:32.412724 I | embed: serving client requests on 127.0.0.1:2379
	2025-04-07 13:43:46.633469 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-07 13:43:47.369969 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-07 13:43:57.369938 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-07 13:44:07.369745 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-07 13:44:17.369850 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-07 13:44:27.369806 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-07 13:44:37.369904 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-07 13:44:47.369787 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-07 13:44:57.369872 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-07 13:45:07.370072 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-07 13:45:17.369718 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-07 13:45:26.289935 N | pkg/osutil: received terminated signal, shutting down...
	WARNING: 2025/04/07 13:45:26 grpc: addrConn.createTransport failed to connect to {192.168.76.2:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 192.168.76.2:2379: connect: connection refused". Reconnecting...
	WARNING: 2025/04/07 13:45:26 grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	2025-04-07 13:45:26.354672 I | etcdserver: skipped leadership transfer for single voting member cluster
	
	
	==> kernel <==
	 13:51:53 up  7:34,  0 users,  load average: 1.22, 1.90, 2.75
	Linux old-k8s-version-169187 5.15.0-1081-aws #88~20.04.1-Ubuntu SMP Fri Mar 28 14:48:25 UTC 2025 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [78f8992ce8b4] <==
	W0407 13:45:26.331063       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0407 13:45:26.331098       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	I0407 13:45:26.331448       1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
	I0407 13:45:26.331578       1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
	I0407 13:45:26.332669       1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
	I0407 13:45:26.332760       1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
	I0407 13:45:26.332906       1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
	I0407 13:45:26.332994       1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
	W0407 13:45:26.333076       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0407 13:45:26.333115       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0407 13:45:26.333150       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0407 13:45:26.333177       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0407 13:45:26.333210       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0407 13:45:26.333239       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0407 13:45:26.333268       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0407 13:45:26.333297       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0407 13:45:26.333326       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0407 13:45:26.333353       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0407 13:45:26.333379       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0407 13:45:26.333406       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0407 13:45:26.333437       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0407 13:45:26.333472       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	I0407 13:45:26.333520       1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
	I0407 13:45:26.344355       1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
	W0407 13:45:26.344535       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	
	
	==> kube-apiserver [82525be035b3] <==
	I0407 13:48:22.144316       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0407 13:48:22.144326       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0407 13:48:56.794902       1 client.go:360] parsed scheme: "passthrough"
	I0407 13:48:56.794960       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0407 13:48:56.795058       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0407 13:49:02.902880       1 handler_proxy.go:102] no RequestInfo found in the context
	E0407 13:49:02.902955       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0407 13:49:02.902963       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0407 13:49:37.482447       1 client.go:360] parsed scheme: "passthrough"
	I0407 13:49:37.482489       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0407 13:49:37.482498       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0407 13:50:16.293930       1 client.go:360] parsed scheme: "passthrough"
	I0407 13:50:16.293978       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0407 13:50:16.293987       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0407 13:50:46.894985       1 client.go:360] parsed scheme: "passthrough"
	I0407 13:50:46.895029       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0407 13:50:46.895255       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0407 13:51:00.939757       1 handler_proxy.go:102] no RequestInfo found in the context
	E0407 13:51:00.939984       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0407 13:51:00.940002       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0407 13:51:25.878254       1 client.go:360] parsed scheme: "passthrough"
	I0407 13:51:25.878300       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0407 13:51:25.878310       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
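The recurring v1beta1.metrics.k8s.io 503s show the aggregated metrics API never became available, consistent with the metrics-server image never being pulled (see the Docker section above). Its registration status could be checked with:

    kubectl get apiservices v1beta1.metrics.k8s.io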
	
	
	==> kube-controller-manager [3e48a853c03b] <==
	I0407 13:43:58.134344       1 disruption.go:339] Sending events to api server.
	I0407 13:43:58.148473       1 shared_informer.go:247] Caches are synced for ReplicationController 
	I0407 13:43:58.149348       1 shared_informer.go:247] Caches are synced for endpoint_slice 
	I0407 13:43:58.153996       1 shared_informer.go:247] Caches are synced for resource quota 
	I0407 13:43:58.155822       1 shared_informer.go:247] Caches are synced for stateful set 
	I0407 13:43:58.162693       1 shared_informer.go:247] Caches are synced for daemon sets 
	I0407 13:43:58.166762       1 shared_informer.go:247] Caches are synced for deployment 
	I0407 13:43:58.167375       1 shared_informer.go:247] Caches are synced for endpoint 
	I0407 13:43:58.184364       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-74ff55c5b to 2"
	I0407 13:43:58.191932       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-d8l5m"
	I0407 13:43:58.205378       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-gjv5d"
	I0407 13:43:58.240557       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-zpflr"
	I0407 13:43:58.299069       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I0407 13:43:58.548057       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0407 13:43:58.548095       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0407 13:43:58.599557       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0407 13:44:00.079728       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
	I0407 13:44:00.120484       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-gjv5d"
	I0407 13:45:24.973844       1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-9975d5f86 to 1"
	E0407 13:45:25.162165       1 clusterroleaggregation_controller.go:181] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
	I0407 13:45:26.097603       1 event.go:291] "Event occurred" object="kube-system/metrics-server-9975d5f86" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-9975d5f86-7rkcc"
	E0407 13:45:26.189580       1 request.go:1011] Unexpected error when reading response body: unexpected EOF
	W0407 13:45:26.189654       1 endpointslice_controller.go:284] Error syncing endpoint slices for service "kube-system/metrics-server", retrying. Error: failed to update metrics-server-trdqs EndpointSlice for Service kube-system/metrics-server: unexpected error when reading response body. Please retry. Original error: unexpected EOF
	I0407 13:45:26.189885       1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Service" apiVersion="v1" type="Warning" reason="FailedToUpdateEndpointSlices" message="Error updating Endpoint Slices for Service kube-system/metrics-server: failed to update metrics-server-trdqs EndpointSlice for Service kube-system/metrics-server: unexpected error when reading response body. Please retry. Original error: unexpected EOF"
	E0407 13:45:26.190032       1 event.go:273] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"metrics-server.18340d4074e7de8d", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"metrics-server", UID:"941a048e-4b42-4187-9153-b8fd69fcbf95", APIVersion:"v1", ResourceVersion:"567", FieldPath:""}, Reason:"FailedToUpdateEndpointSlices", Message:"Error updating Endpoint Slices for Service kube-system/metrics-server: failed to update metrics-server-trdqs EndpointSlice for Service kube-system/metrics-server: unexpected error when reading response body. Please retry. Original error: unexpected EOF", Source:v1.EventSource{Component:"endpoint-slice-controller", Host:""}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc1f5139d8b4dc28d, ext:114096331347, loc:(*time.Location)(0x632eb80)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc1f5139d8b4dc28d, ext:114096331347, loc:(*time.Location)(0x632eb80)}}, Count:1, Type:"Warning", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://192.168.76.2:8443/api/v1/namespaces/kube-system/events": dial tcp 192.168.76.2:8443: connect: connection refused'(may retry after sleeping)
	
	
	==> kube-controller-manager [c2da54d5c256] <==
	W0407 13:47:24.559664       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0407 13:47:50.486473       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0407 13:47:56.210215       1 request.go:655] Throttling request took 1.048375743s, request: GET:https://192.168.76.2:8443/apis/coordination.k8s.io/v1?timeout=32s
	W0407 13:47:57.061756       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0407 13:48:20.988246       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0407 13:48:28.712194       1 request.go:655] Throttling request took 1.048493439s, request: GET:https://192.168.76.2:8443/apis/scheduling.k8s.io/v1beta1?timeout=32s
	W0407 13:48:29.563626       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0407 13:48:51.489981       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0407 13:49:01.214078       1 request.go:655] Throttling request took 1.043827034s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W0407 13:49:02.065575       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0407 13:49:21.991838       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0407 13:49:33.716012       1 request.go:655] Throttling request took 1.048540801s, request: GET:https://192.168.76.2:8443/apis/storage.k8s.io/v1?timeout=32s
	W0407 13:49:34.567639       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0407 13:49:52.498121       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0407 13:50:06.218073       1 request.go:655] Throttling request took 1.048009912s, request: GET:https://192.168.76.2:8443/apis/authentication.k8s.io/v1beta1?timeout=32s
	W0407 13:50:07.069534       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0407 13:50:22.999946       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0407 13:50:38.720073       1 request.go:655] Throttling request took 1.048132225s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W0407 13:50:39.572809       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0407 13:50:53.501680       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0407 13:51:11.223207       1 request.go:655] Throttling request took 1.048304111s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W0407 13:51:12.074821       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0407 13:51:24.007369       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0407 13:51:43.725188       1 request.go:655] Throttling request took 1.048272481s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W0407 13:51:44.577023       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	
	==> kube-proxy [062895b6a45a] <==
	I0407 13:46:02.567998       1 node.go:172] Successfully retrieved node IP: 192.168.76.2
	I0407 13:46:02.568091       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
	W0407 13:46:02.595809       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0407 13:46:02.595909       1 server_others.go:185] Using iptables Proxier.
	I0407 13:46:02.596121       1 server.go:650] Version: v1.20.0
	I0407 13:46:02.597183       1 config.go:315] Starting service config controller
	I0407 13:46:02.597200       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0407 13:46:02.597219       1 config.go:224] Starting endpoint slice config controller
	I0407 13:46:02.597223       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0407 13:46:02.697332       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0407 13:46:02.697332       1 shared_informer.go:247] Caches are synced for service config 
	
	
	==> kube-proxy [7cb4581969c6] <==
	I0407 13:43:59.939460       1 node.go:172] Successfully retrieved node IP: 192.168.76.2
	I0407 13:43:59.939625       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
	W0407 13:43:59.976384       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0407 13:43:59.976505       1 server_others.go:185] Using iptables Proxier.
	I0407 13:43:59.977026       1 server.go:650] Version: v1.20.0
	I0407 13:43:59.982108       1 config.go:315] Starting service config controller
	I0407 13:43:59.982134       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0407 13:43:59.982167       1 config.go:224] Starting endpoint slice config controller
	I0407 13:43:59.982171       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0407 13:44:00.090757       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0407 13:44:00.090824       1 shared_informer.go:247] Caches are synced for service config 
	
	
	==> kube-scheduler [3a9781764312] <==
	I0407 13:43:34.657699       1 serving.go:331] Generated self-signed cert in-memory
	W0407 13:43:39.470613       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0407 13:43:39.470843       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0407 13:43:39.470931       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0407 13:43:39.471009       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0407 13:43:39.548348       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0407 13:43:39.549511       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0407 13:43:39.549531       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0407 13:43:39.549546       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0407 13:43:39.558399       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0407 13:43:39.563243       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0407 13:43:39.563442       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0407 13:43:39.593783       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0407 13:43:39.598129       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0407 13:43:39.598220       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0407 13:43:39.598287       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0407 13:43:39.598377       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0407 13:43:39.598447       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0407 13:43:39.598512       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0407 13:43:39.598578       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0407 13:43:39.600804       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0407 13:43:40.551205       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0407 13:43:40.568065       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0407 13:43:41.049592       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kube-scheduler [fce53c7f2eb0] <==
	I0407 13:45:55.391148       1 serving.go:331] Generated self-signed cert in-memory
	W0407 13:45:59.900496       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0407 13:45:59.900612       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0407 13:45:59.900642       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0407 13:45:59.900736       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0407 13:46:00.240667       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0407 13:46:00.263592       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I0407 13:46:00.270095       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0407 13:46:00.270130       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0407 13:46:00.373017       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kubelet <==
	Apr 07 13:49:29 old-k8s-version-169187 kubelet[1477]: E0407 13:49:29.638278    1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	Apr 07 13:49:30 old-k8s-version-169187 kubelet[1477]: E0407 13:49:30.638576    1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Apr 07 13:49:41 old-k8s-version-169187 kubelet[1477]: E0407 13:49:41.654838    1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	Apr 07 13:49:43 old-k8s-version-169187 kubelet[1477]: E0407 13:49:43.636263    1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Apr 07 13:49:56 old-k8s-version-169187 kubelet[1477]: E0407 13:49:56.633358    1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	Apr 07 13:49:57 old-k8s-version-169187 kubelet[1477]: E0407 13:49:57.633386    1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Apr 07 13:50:07 old-k8s-version-169187 kubelet[1477]: E0407 13:50:07.638473    1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	Apr 07 13:50:12 old-k8s-version-169187 kubelet[1477]: E0407 13:50:12.633385    1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Apr 07 13:50:22 old-k8s-version-169187 kubelet[1477]: E0407 13:50:22.633440    1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	Apr 07 13:50:25 old-k8s-version-169187 kubelet[1477]: E0407 13:50:25.633558    1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Apr 07 13:50:37 old-k8s-version-169187 kubelet[1477]: E0407 13:50:37.636391    1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	Apr 07 13:50:39 old-k8s-version-169187 kubelet[1477]: E0407 13:50:39.634229    1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Apr 07 13:50:51 old-k8s-version-169187 kubelet[1477]: E0407 13:50:51.633542    1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Apr 07 13:50:51 old-k8s-version-169187 kubelet[1477]: E0407 13:50:51.643349    1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	Apr 07 13:51:03 old-k8s-version-169187 kubelet[1477]: E0407 13:51:03.636916    1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	Apr 07 13:51:06 old-k8s-version-169187 kubelet[1477]: E0407 13:51:06.633346    1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Apr 07 13:51:17 old-k8s-version-169187 kubelet[1477]: E0407 13:51:17.638234    1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	Apr 07 13:51:19 old-k8s-version-169187 kubelet[1477]: E0407 13:51:19.643952    1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Apr 07 13:51:32 old-k8s-version-169187 kubelet[1477]: E0407 13:51:32.635754    1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	Apr 07 13:51:33 old-k8s-version-169187 kubelet[1477]: E0407 13:51:33.635745    1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Apr 07 13:51:45 old-k8s-version-169187 kubelet[1477]: E0407 13:51:45.633252    1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	Apr 07 13:51:48 old-k8s-version-169187 kubelet[1477]: E0407 13:51:48.658846    1477 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
	Apr 07 13:51:48 old-k8s-version-169187 kubelet[1477]: E0407 13:51:48.658894    1477 kuberuntime_image.go:51] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
	Apr 07 13:51:48 old-k8s-version-169187 kubelet[1477]: E0407 13:51:48.659031    1477 kuberuntime_manager.go:829] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:metrics-server-token-wkm2d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843): ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
	Apr 07 13:51:48 old-k8s-version-169187 kubelet[1477]: E0407 13:51:48.659062    1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	
	
	==> kubernetes-dashboard [c66d59ac00e0] <==
	2025/04/07 13:46:25 Using namespace: kubernetes-dashboard
	2025/04/07 13:46:25 Using in-cluster config to connect to apiserver
	2025/04/07 13:46:25 Using secret token for csrf signing
	2025/04/07 13:46:25 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/04/07 13:46:25 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/04/07 13:46:25 Successful initial request to the apiserver, version: v1.20.0
	2025/04/07 13:46:25 Generating JWE encryption key
	2025/04/07 13:46:25 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/04/07 13:46:25 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/04/07 13:46:26 Initializing JWE encryption key from synchronized object
	2025/04/07 13:46:26 Creating in-cluster Sidecar client
	2025/04/07 13:46:26 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/04/07 13:46:26 Serving insecurely on HTTP port: 9090
	2025/04/07 13:46:56 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/04/07 13:47:26 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/04/07 13:47:56 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/04/07 13:48:26 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/04/07 13:48:56 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/04/07 13:49:26 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/04/07 13:49:56 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/04/07 13:50:26 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/04/07 13:50:56 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/04/07 13:51:26 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/04/07 13:46:25 Starting overwatch
	
	
	==> storage-provisioner [55bf8eb1ab94] <==
	I0407 13:46:46.771222       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0407 13:46:46.803835       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0407 13:46:46.804116       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0407 13:47:04.291868       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0407 13:47:04.294757       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-169187_68cc6a41-f96c-4ad4-b042-5afe60562cec!
	I0407 13:47:04.297213       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d9afc515-4e20-4942-a68b-b86c816b4262", APIVersion:"v1", ResourceVersion:"799", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-169187_68cc6a41-f96c-4ad4-b042-5afe60562cec became leader
	I0407 13:47:04.396153       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-169187_68cc6a41-f96c-4ad4-b042-5afe60562cec!
	
	
	==> storage-provisioner [fcbefe8497a0] <==
	I0407 13:46:02.084656       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0407 13:46:32.087032       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-169187 -n old-k8s-version-169187
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-169187 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-9975d5f86-7rkcc dashboard-metrics-scraper-8d5bb5db8-8v7k4
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-169187 describe pod metrics-server-9975d5f86-7rkcc dashboard-metrics-scraper-8d5bb5db8-8v7k4
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-169187 describe pod metrics-server-9975d5f86-7rkcc dashboard-metrics-scraper-8d5bb5db8-8v7k4: exit status 1 (154.236881ms)

                                                
                                                
** stderr ** 
	E0407 13:51:55.039105 1837799 memcache.go:287] "Unhandled Error" err="couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request"
	E0407 13:51:55.068175 1837799 memcache.go:121] "Unhandled Error" err="couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request"
	E0407 13:51:55.073448 1837799 memcache.go:121] "Unhandled Error" err="couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request"
	E0407 13:51:55.077781 1837799 memcache.go:121] "Unhandled Error" err="couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request"
	Error from server (NotFound): pods "metrics-server-9975d5f86-7rkcc" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-8d5bb5db8-8v7k4" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-169187 describe pod metrics-server-9975d5f86-7rkcc dashboard-metrics-scraper-8d5bb5db8-8v7k4: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (377.84s)

                                                
                                    

Test pass (319/346)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 9.21
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.09
9 TestDownloadOnly/v1.20.0/DeleteAll 0.22
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.32.2/json-events 5.28
13 TestDownloadOnly/v1.32.2/preload-exists 0
17 TestDownloadOnly/v1.32.2/LogsDuration 0.1
18 TestDownloadOnly/v1.32.2/DeleteAll 0.22
19 TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds 0.14
21 TestBinaryMirror 0.6
22 TestOffline 88.35
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.09
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 223.08
29 TestAddons/serial/Volcano 42.91
31 TestAddons/serial/GCPAuth/Namespaces 0.18
32 TestAddons/serial/GCPAuth/FakeCredentials 9.92
35 TestAddons/parallel/Registry 17.86
36 TestAddons/parallel/Ingress 19.34
37 TestAddons/parallel/InspektorGadget 11.84
38 TestAddons/parallel/MetricsServer 6.74
40 TestAddons/parallel/CSI 49.02
41 TestAddons/parallel/Headlamp 17.7
42 TestAddons/parallel/CloudSpanner 6.57
43 TestAddons/parallel/LocalPath 52.93
44 TestAddons/parallel/NvidiaDevicePlugin 5.56
45 TestAddons/parallel/Yakd 11.72
47 TestAddons/StoppedEnableDisable 11.24
48 TestCertOptions 35.17
49 TestCertExpiration 246.96
50 TestDockerFlags 34.19
51 TestForceSystemdFlag 41.23
52 TestForceSystemdEnv 43.49
58 TestErrorSpam/setup 37.91
59 TestErrorSpam/start 0.79
60 TestErrorSpam/status 1.09
61 TestErrorSpam/pause 1.42
62 TestErrorSpam/unpause 1.5
63 TestErrorSpam/stop 1.41
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 43.6
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 35.3
70 TestFunctional/serial/KubeContext 0.06
71 TestFunctional/serial/KubectlGetPods 0.09
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.39
75 TestFunctional/serial/CacheCmd/cache/add_local 1.03
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
77 TestFunctional/serial/CacheCmd/cache/list 0.05
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.32
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.61
80 TestFunctional/serial/CacheCmd/cache/delete 0.12
81 TestFunctional/serial/MinikubeKubectlCmd 0.18
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
83 TestFunctional/serial/ExtraConfig 45.03
84 TestFunctional/serial/ComponentHealth 0.11
85 TestFunctional/serial/LogsCmd 1.21
86 TestFunctional/serial/LogsFileCmd 1.24
87 TestFunctional/serial/InvalidService 4.98
89 TestFunctional/parallel/ConfigCmd 0.5
90 TestFunctional/parallel/DashboardCmd 15
91 TestFunctional/parallel/DryRun 0.46
92 TestFunctional/parallel/InternationalLanguage 0.19
93 TestFunctional/parallel/StatusCmd 1.09
97 TestFunctional/parallel/ServiceCmdConnect 12.67
98 TestFunctional/parallel/AddonsCmd 0.18
99 TestFunctional/parallel/PersistentVolumeClaim 26.9
101 TestFunctional/parallel/SSHCmd 0.72
102 TestFunctional/parallel/CpCmd 2.03
104 TestFunctional/parallel/FileSync 0.35
105 TestFunctional/parallel/CertSync 2.18
109 TestFunctional/parallel/NodeLabels 0.1
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.32
113 TestFunctional/parallel/License 0.24
115 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.57
116 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
118 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.34
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.09
120 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
124 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
125 TestFunctional/parallel/ServiceCmd/DeployApp 6.23
126 TestFunctional/parallel/ProfileCmd/profile_not_create 0.43
127 TestFunctional/parallel/ProfileCmd/profile_list 0.51
128 TestFunctional/parallel/ProfileCmd/profile_json_output 0.51
129 TestFunctional/parallel/ServiceCmd/List 0.63
130 TestFunctional/parallel/MountCmd/any-port 8.4
131 TestFunctional/parallel/ServiceCmd/JSONOutput 0.68
132 TestFunctional/parallel/ServiceCmd/HTTPS 0.54
133 TestFunctional/parallel/ServiceCmd/Format 0.48
134 TestFunctional/parallel/ServiceCmd/URL 0.46
135 TestFunctional/parallel/MountCmd/specific-port 2.33
136 TestFunctional/parallel/MountCmd/VerifyCleanup 1.86
137 TestFunctional/parallel/Version/short 0.07
138 TestFunctional/parallel/Version/components 1.19
139 TestFunctional/parallel/ImageCommands/ImageListShort 0.24
140 TestFunctional/parallel/ImageCommands/ImageListTable 0.26
141 TestFunctional/parallel/ImageCommands/ImageListJson 0.26
142 TestFunctional/parallel/ImageCommands/ImageListYaml 0.24
143 TestFunctional/parallel/ImageCommands/ImageBuild 3.48
144 TestFunctional/parallel/ImageCommands/Setup 0.79
145 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.2
146 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.84
147 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.07
148 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.34
149 TestFunctional/parallel/ImageCommands/ImageRemove 0.42
150 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.67
151 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.36
152 TestFunctional/parallel/DockerEnv/bash 1.4
153 TestFunctional/parallel/UpdateContextCmd/no_changes 0.21
154 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.2
155 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.17
156 TestFunctional/delete_echo-server_images 0.04
157 TestFunctional/delete_my-image_image 0.02
158 TestFunctional/delete_minikube_cached_images 0.02
163 TestMultiControlPlane/serial/StartCluster 128.23
164 TestMultiControlPlane/serial/DeployApp 43.4
165 TestMultiControlPlane/serial/PingHostFromPods 1.88
166 TestMultiControlPlane/serial/AddWorkerNode 26.01
167 TestMultiControlPlane/serial/NodeLabels 0.12
168 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.03
169 TestMultiControlPlane/serial/CopyFile 19.69
170 TestMultiControlPlane/serial/StopSecondaryNode 11.68
171 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.76
172 TestMultiControlPlane/serial/RestartSecondaryNode 39.18
173 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.56
174 TestMultiControlPlane/serial/RestartClusterKeepsNodes 294.55
175 TestMultiControlPlane/serial/DeleteSecondaryNode 11.17
176 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.78
177 TestMultiControlPlane/serial/StopCluster 32.82
178 TestMultiControlPlane/serial/RestartCluster 87.62
179 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.75
180 TestMultiControlPlane/serial/AddSecondaryNode 44.68
181 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.02
184 TestImageBuild/serial/Setup 32.65
185 TestImageBuild/serial/NormalBuild 1.89
186 TestImageBuild/serial/BuildWithBuildArg 1.02
187 TestImageBuild/serial/BuildWithDockerIgnore 0.8
188 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.71
192 TestJSONOutput/start/Command 43.04
193 TestJSONOutput/start/Audit 0
195 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
196 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/pause/Command 0.6
199 TestJSONOutput/pause/Audit 0
201 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
202 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
204 TestJSONOutput/unpause/Command 0.51
205 TestJSONOutput/unpause/Audit 0
207 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
208 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
210 TestJSONOutput/stop/Command 10.89
211 TestJSONOutput/stop/Audit 0
213 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
214 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
215 TestErrorJSONOutput 0.23
217 TestKicCustomNetwork/create_custom_network 31.02
218 TestKicCustomNetwork/use_default_bridge_network 31.86
219 TestKicExistingNetwork 31.04
220 TestKicCustomSubnet 36.69
221 TestKicStaticIP 32.66
222 TestMainNoArgs 0.05
223 TestMinikubeProfile 73.25
226 TestMountStart/serial/StartWithMountFirst 8.16
227 TestMountStart/serial/VerifyMountFirst 0.26
228 TestMountStart/serial/StartWithMountSecond 10.33
229 TestMountStart/serial/VerifyMountSecond 0.28
230 TestMountStart/serial/DeleteFirst 1.47
231 TestMountStart/serial/VerifyMountPostDelete 0.27
232 TestMountStart/serial/Stop 1.2
233 TestMountStart/serial/RestartStopped 8.04
234 TestMountStart/serial/VerifyMountPostStop 0.27
237 TestMultiNode/serial/FreshStart2Nodes 83.73
238 TestMultiNode/serial/DeployApp2Nodes 36.43
239 TestMultiNode/serial/PingHostFrom2Pods 1.04
240 TestMultiNode/serial/AddNode 16.4
241 TestMultiNode/serial/MultiNodeLabels 0.12
242 TestMultiNode/serial/ProfileList 0.76
243 TestMultiNode/serial/CopyFile 10.03
244 TestMultiNode/serial/StopNode 2.3
245 TestMultiNode/serial/StartAfterStop 11.51
246 TestMultiNode/serial/RestartKeepsNodes 85.19
247 TestMultiNode/serial/DeleteNode 5.31
248 TestMultiNode/serial/StopMultiNode 21.81
249 TestMultiNode/serial/RestartMultiNode 56.99
250 TestMultiNode/serial/ValidateNameConflict 33.9
255 TestPreload 140.09
257 TestScheduledStopUnix 107.02
258 TestSkaffold 118.36
260 TestInsufficientStorage 10.28
261 TestRunningBinaryUpgrade 77.79
263 TestKubernetesUpgrade 385.96
264 TestMissingContainerUpgrade 161.79
266 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
267 TestNoKubernetes/serial/StartWithK8s 39.75
268 TestNoKubernetes/serial/StartWithStopK8s 18.23
269 TestNoKubernetes/serial/Start 9.36
270 TestNoKubernetes/serial/VerifyK8sNotRunning 0.28
271 TestNoKubernetes/serial/ProfileList 1.03
272 TestNoKubernetes/serial/Stop 1.22
273 TestNoKubernetes/serial/StartNoArgs 7.35
274 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.27
275 TestStoppedBinaryUpgrade/Setup 0.79
276 TestStoppedBinaryUpgrade/Upgrade 83.27
277 TestStoppedBinaryUpgrade/MinikubeLogs 1.78
286 TestPause/serial/Start 79.58
287 TestPause/serial/SecondStartNoReconfiguration 31.44
299 TestPause/serial/Pause 0.74
300 TestPause/serial/VerifyStatus 0.41
301 TestPause/serial/Unpause 0.84
302 TestPause/serial/PauseAgain 1.07
303 TestPause/serial/DeletePaused 2.41
304 TestPause/serial/VerifyDeletedResources 0.19
306 TestStartStop/group/old-k8s-version/serial/FirstStart 138.57
308 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 79.57
309 TestStartStop/group/old-k8s-version/serial/DeployApp 10.64
310 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.45
311 TestStartStop/group/old-k8s-version/serial/Stop 11.33
312 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.32
314 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.38
315 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.15
316 TestStartStop/group/default-k8s-diff-port/serial/Stop 10.84
317 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.2
318 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 266.25
319 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
320 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.1
321 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.21
322 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.94
324 TestStartStop/group/embed-certs/serial/FirstStart 56.65
325 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
326 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.19
327 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.29
328 TestStartStop/group/old-k8s-version/serial/Pause 4.04
330 TestStartStop/group/no-preload/serial/FirstStart 86.88
331 TestStartStop/group/embed-certs/serial/DeployApp 13.48
332 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.42
333 TestStartStop/group/embed-certs/serial/Stop 11.01
334 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.24
335 TestStartStop/group/embed-certs/serial/SecondStart 266.73
336 TestStartStop/group/no-preload/serial/DeployApp 11.39
337 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.18
338 TestStartStop/group/no-preload/serial/Stop 11.02
339 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.19
340 TestStartStop/group/no-preload/serial/SecondStart 268
341 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
342 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.11
343 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.22
344 TestStartStop/group/embed-certs/serial/Pause 2.84
346 TestStartStop/group/newest-cni/serial/FirstStart 37.12
347 TestStartStop/group/newest-cni/serial/DeployApp 0
348 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.51
349 TestStartStop/group/newest-cni/serial/Stop 9.19
350 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.29
351 TestStartStop/group/newest-cni/serial/SecondStart 22.85
352 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
353 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.16
354 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.32
355 TestStartStop/group/no-preload/serial/Pause 4.21
356 TestNetworkPlugins/group/auto/Start 87.21
357 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
358 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
359 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.4
360 TestStartStop/group/newest-cni/serial/Pause 4.56
361 TestNetworkPlugins/group/kindnet/Start 73.8
362 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
363 TestNetworkPlugins/group/auto/KubeletFlags 0.32
364 TestNetworkPlugins/group/auto/NetCatPod 12.31
365 TestNetworkPlugins/group/kindnet/KubeletFlags 0.32
366 TestNetworkPlugins/group/kindnet/NetCatPod 11.3
367 TestNetworkPlugins/group/auto/DNS 0.22
368 TestNetworkPlugins/group/auto/Localhost 0.16
369 TestNetworkPlugins/group/auto/HairPin 0.18
370 TestNetworkPlugins/group/kindnet/DNS 0.19
371 TestNetworkPlugins/group/kindnet/Localhost 0.16
372 TestNetworkPlugins/group/kindnet/HairPin 0.16
373 TestNetworkPlugins/group/calico/Start 104.96
374 TestNetworkPlugins/group/custom-flannel/Start 58.3
375 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.41
376 TestNetworkPlugins/group/custom-flannel/NetCatPod 14.35
377 TestNetworkPlugins/group/custom-flannel/DNS 0.4
378 TestNetworkPlugins/group/custom-flannel/Localhost 0.3
379 TestNetworkPlugins/group/custom-flannel/HairPin 0.22
380 TestNetworkPlugins/group/false/Start 76.23
381 TestNetworkPlugins/group/calico/ControllerPod 6.01
382 TestNetworkPlugins/group/calico/KubeletFlags 0.32
383 TestNetworkPlugins/group/calico/NetCatPod 12.37
384 TestNetworkPlugins/group/calico/DNS 0.31
385 TestNetworkPlugins/group/calico/Localhost 0.22
386 TestNetworkPlugins/group/calico/HairPin 0.21
387 TestNetworkPlugins/group/enable-default-cni/Start 42.29
388 TestNetworkPlugins/group/false/KubeletFlags 0.33
389 TestNetworkPlugins/group/false/NetCatPod 11.33
390 TestNetworkPlugins/group/false/DNS 0.19
391 TestNetworkPlugins/group/false/Localhost 0.16
392 TestNetworkPlugins/group/false/HairPin 0.16
393 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.29
394 TestNetworkPlugins/group/enable-default-cni/NetCatPod 13.28
395 TestNetworkPlugins/group/enable-default-cni/DNS 0.28
396 TestNetworkPlugins/group/enable-default-cni/Localhost 0.23
397 TestNetworkPlugins/group/enable-default-cni/HairPin 0.31
398 TestNetworkPlugins/group/flannel/Start 64.88
399 TestNetworkPlugins/group/bridge/Start 83.37
400 TestNetworkPlugins/group/flannel/ControllerPod 6
401 TestNetworkPlugins/group/flannel/KubeletFlags 0.29
402 TestNetworkPlugins/group/flannel/NetCatPod 10.31
403 TestNetworkPlugins/group/flannel/DNS 0.2
404 TestNetworkPlugins/group/flannel/Localhost 0.17
405 TestNetworkPlugins/group/flannel/HairPin 0.16
406 TestNetworkPlugins/group/bridge/KubeletFlags 0.44
407 TestNetworkPlugins/group/bridge/NetCatPod 12.35
408 TestNetworkPlugins/group/kubenet/Start 75.99
409 TestNetworkPlugins/group/bridge/DNS 0.24
410 TestNetworkPlugins/group/bridge/Localhost 0.21
411 TestNetworkPlugins/group/bridge/HairPin 0.2
412 TestNetworkPlugins/group/kubenet/KubeletFlags 0.3
413 TestNetworkPlugins/group/kubenet/NetCatPod 10.28
414 TestNetworkPlugins/group/kubenet/DNS 0.19
415 TestNetworkPlugins/group/kubenet/Localhost 0.15
416 TestNetworkPlugins/group/kubenet/HairPin 0.15
TestDownloadOnly/v1.20.0/json-events (9.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-378137 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-378137 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (9.21331533s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (9.21s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0407 12:50:17.406150 1495026 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
I0407 12:50:17.406225 1495026 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20598-1489638/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-378137
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-378137: exit status 85 (91.900852ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-378137 | jenkins | v1.35.0 | 07 Apr 25 12:50 UTC |          |
	|         | -p download-only-378137        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/07 12:50:08
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0407 12:50:08.238398 1495031 out.go:345] Setting OutFile to fd 1 ...
	I0407 12:50:08.238591 1495031 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 12:50:08.238620 1495031 out.go:358] Setting ErrFile to fd 2...
	I0407 12:50:08.238640 1495031 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 12:50:08.238890 1495031 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20598-1489638/.minikube/bin
	W0407 12:50:08.239038 1495031 root.go:314] Error reading config file at /home/jenkins/minikube-integration/20598-1489638/.minikube/config/config.json: open /home/jenkins/minikube-integration/20598-1489638/.minikube/config/config.json: no such file or directory
	I0407 12:50:08.239451 1495031 out.go:352] Setting JSON to true
	I0407 12:50:08.240320 1495031 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":23557,"bootTime":1744006652,"procs":153,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1081-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0407 12:50:08.240414 1495031 start.go:139] virtualization:  
	I0407 12:50:08.244619 1495031 out.go:97] [download-only-378137] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	W0407 12:50:08.244800 1495031 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/20598-1489638/.minikube/cache/preloaded-tarball: no such file or directory
	I0407 12:50:08.244840 1495031 notify.go:220] Checking for updates...
	I0407 12:50:08.247773 1495031 out.go:169] MINIKUBE_LOCATION=20598
	I0407 12:50:08.250797 1495031 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0407 12:50:08.253794 1495031 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20598-1489638/kubeconfig
	I0407 12:50:08.256677 1495031 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20598-1489638/.minikube
	I0407 12:50:08.259600 1495031 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0407 12:50:08.265379 1495031 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0407 12:50:08.265671 1495031 driver.go:394] Setting default libvirt URI to qemu:///system
	I0407 12:50:08.287005 1495031 docker.go:123] docker version: linux-28.0.4:Docker Engine - Community
	I0407 12:50:08.287111 1495031 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0407 12:50:08.351615 1495031 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:true NGoroutines:53 SystemTime:2025-04-07 12:50:08.342152535 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1081-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0]] Warnings:<nil>}}
	I0407 12:50:08.351720 1495031 docker.go:318] overlay module found
	I0407 12:50:08.354721 1495031 out.go:97] Using the docker driver based on user configuration
	I0407 12:50:08.354761 1495031 start.go:297] selected driver: docker
	I0407 12:50:08.354776 1495031 start.go:901] validating driver "docker" against <nil>
	I0407 12:50:08.354891 1495031 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0407 12:50:08.412514 1495031 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:true NGoroutines:53 SystemTime:2025-04-07 12:50:08.403616052 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1081-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0]] Warnings:<nil>}}
	I0407 12:50:08.412672 1495031 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0407 12:50:08.412967 1495031 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0407 12:50:08.413119 1495031 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0407 12:50:08.416441 1495031 out.go:169] Using Docker driver with root privileges
	I0407 12:50:08.419188 1495031 cni.go:84] Creating CNI manager for ""
	I0407 12:50:08.419264 1495031 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0407 12:50:08.419345 1495031 start.go:340] cluster config:
	{Name:download-only-378137 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-378137 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 12:50:08.422340 1495031 out.go:97] Starting "download-only-378137" primary control-plane node in "download-only-378137" cluster
	I0407 12:50:08.422369 1495031 cache.go:121] Beginning downloading kic base image for docker with docker
	I0407 12:50:08.425167 1495031 out.go:97] Pulling base image v0.0.46-1743675393-20591 ...
	I0407 12:50:08.425201 1495031 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0407 12:50:08.425318 1495031 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 in local docker daemon
	I0407 12:50:08.441667 1495031 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 to local cache
	I0407 12:50:08.441861 1495031 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 in local cache directory
	I0407 12:50:08.441954 1495031 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 to local cache
	I0407 12:50:08.479062 1495031 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0407 12:50:08.479086 1495031 cache.go:56] Caching tarball of preloaded images
	I0407 12:50:08.479240 1495031 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0407 12:50:08.482635 1495031 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0407 12:50:08.482663 1495031 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0407 12:50:08.569486 1495031 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /home/jenkins/minikube-integration/20598-1489638/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-378137 host does not exist
	  To start a cluster, run: "minikube start -p download-only-378137"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.09s)
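The preload is fetched with its digest inline in the URL (?checksum=md5:1a3e8f9b...), so the tarball can be validated the moment it lands on disk. A minimal sketch of that verification step (verifyMD5 is an illustrative helper, not minikube's code; MD5 is used here only because that is what the URL advertises):

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"os"
)

// verifyMD5 hashes the file at path and compares it to the expected
// hex digest, e.g. the 1a3e8f9b... value in the preload URL above.
func verifyMD5(path, want string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()
	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != want {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
	}
	return nil
}

func main() {
	if len(os.Args) != 3 {
		fmt.Fprintln(os.Stderr, "usage: verify <file> <md5-hex>")
		os.Exit(2)
	}
	if err := verifyMD5(os.Args[1], os.Args[2]); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("checksum OK")
}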

TestDownloadOnly/v1.20.0/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.22s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-378137
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.32.2/json-events (5.28s)

=== RUN   TestDownloadOnly/v1.32.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-013320 --force --alsologtostderr --kubernetes-version=v1.32.2 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-013320 --force --alsologtostderr --kubernetes-version=v1.32.2 --container-runtime=docker --driver=docker  --container-runtime=docker: (5.27636473s)
--- PASS: TestDownloadOnly/v1.32.2/json-events (5.28s)

TestDownloadOnly/v1.32.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.32.2/preload-exists
I0407 12:50:23.136693 1495026 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime docker
I0407 12:50:23.136737 1495026 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20598-1489638/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-docker-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.32.2/preload-exists (0.00s)

TestDownloadOnly/v1.32.2/LogsDuration (0.1s)

=== RUN   TestDownloadOnly/v1.32.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-013320
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-013320: exit status 85 (96.045479ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-378137 | jenkins | v1.35.0 | 07 Apr 25 12:50 UTC |                     |
	|         | -p download-only-378137        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.35.0 | 07 Apr 25 12:50 UTC | 07 Apr 25 12:50 UTC |
	| delete  | -p download-only-378137        | download-only-378137 | jenkins | v1.35.0 | 07 Apr 25 12:50 UTC | 07 Apr 25 12:50 UTC |
	| start   | -o=json --download-only        | download-only-013320 | jenkins | v1.35.0 | 07 Apr 25 12:50 UTC |                     |
	|         | -p download-only-013320        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/07 12:50:17
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0407 12:50:17.908188 1495233 out.go:345] Setting OutFile to fd 1 ...
	I0407 12:50:17.908308 1495233 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 12:50:17.908319 1495233 out.go:358] Setting ErrFile to fd 2...
	I0407 12:50:17.908324 1495233 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 12:50:17.908583 1495233 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20598-1489638/.minikube/bin
	I0407 12:50:17.908977 1495233 out.go:352] Setting JSON to true
	I0407 12:50:17.909843 1495233 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":23566,"bootTime":1744006652,"procs":147,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1081-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0407 12:50:17.909912 1495233 start.go:139] virtualization:  
	I0407 12:50:17.913406 1495233 out.go:97] [download-only-013320] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	I0407 12:50:17.913685 1495233 notify.go:220] Checking for updates...
	I0407 12:50:17.917327 1495233 out.go:169] MINIKUBE_LOCATION=20598
	I0407 12:50:17.920296 1495233 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0407 12:50:17.923070 1495233 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20598-1489638/kubeconfig
	I0407 12:50:17.925922 1495233 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20598-1489638/.minikube
	I0407 12:50:17.928786 1495233 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0407 12:50:17.934431 1495233 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0407 12:50:17.934668 1495233 driver.go:394] Setting default libvirt URI to qemu:///system
	I0407 12:50:17.965829 1495233 docker.go:123] docker version: linux-28.0.4:Docker Engine - Community
	I0407 12:50:17.966060 1495233 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0407 12:50:18.026440 1495233 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:44 SystemTime:2025-04-07 12:50:18.016696023 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1081-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0]] Warnings:<nil>}}
	I0407 12:50:18.026555 1495233 docker.go:318] overlay module found
	I0407 12:50:18.029519 1495233 out.go:97] Using the docker driver based on user configuration
	I0407 12:50:18.029559 1495233 start.go:297] selected driver: docker
	I0407 12:50:18.029571 1495233 start.go:901] validating driver "docker" against <nil>
	I0407 12:50:18.029677 1495233 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0407 12:50:18.094907 1495233 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:44 SystemTime:2025-04-07 12:50:18.086035367 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1081-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0]] Warnings:<nil>}}
	I0407 12:50:18.095064 1495233 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0407 12:50:18.095361 1495233 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0407 12:50:18.095552 1495233 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0407 12:50:18.098604 1495233 out.go:169] Using Docker driver with root privileges
	I0407 12:50:18.101443 1495233 cni.go:84] Creating CNI manager for ""
	I0407 12:50:18.101535 1495233 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0407 12:50:18.101552 1495233 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0407 12:50:18.101648 1495233 start.go:340] cluster config:
	{Name:download-only-013320 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:download-only-013320 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 12:50:18.104731 1495233 out.go:97] Starting "download-only-013320" primary control-plane node in "download-only-013320" cluster
	I0407 12:50:18.104761 1495233 cache.go:121] Beginning downloading kic base image for docker with docker
	I0407 12:50:18.107668 1495233 out.go:97] Pulling base image v0.0.46-1743675393-20591 ...
	I0407 12:50:18.107719 1495233 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime docker
	I0407 12:50:18.107823 1495233 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 in local docker daemon
	I0407 12:50:18.124189 1495233 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 to local cache
	I0407 12:50:18.124345 1495233 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 in local cache directory
	I0407 12:50:18.124369 1495233 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 in local cache directory, skipping pull
	I0407 12:50:18.124377 1495233 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 exists in cache, skipping pull
	I0407 12:50:18.124384 1495233 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 as a tarball
	I0407 12:50:18.167435 1495233 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.2/preloaded-images-k8s-v18-v1.32.2-docker-overlay2-arm64.tar.lz4
	I0407 12:50:18.167469 1495233 cache.go:56] Caching tarball of preloaded images
	I0407 12:50:18.168232 1495233 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime docker
	I0407 12:50:18.171291 1495233 out.go:97] Downloading Kubernetes v1.32.2 preload ...
	I0407 12:50:18.171317 1495233 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.32.2-docker-overlay2-arm64.tar.lz4 ...
	I0407 12:50:18.258363 1495233 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.2/preloaded-images-k8s-v18-v1.32.2-docker-overlay2-arm64.tar.lz4?checksum=md5:0f214d8e9732f3a450da0811727c623c -> /home/jenkins/minikube-integration/20598-1489638/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-docker-overlay2-arm64.tar.lz4
	I0407 12:50:21.766305 1495233 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.32.2-docker-overlay2-arm64.tar.lz4 ...
	I0407 12:50:21.766414 1495233 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/20598-1489638/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-docker-overlay2-arm64.tar.lz4 ...
	I0407 12:50:22.545527 1495233 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on docker
	I0407 12:50:22.545891 1495233 profile.go:143] Saving config to /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/download-only-013320/config.json ...
	I0407 12:50:22.545925 1495233 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/download-only-013320/config.json: {Name:mke2c89ea747a4fe25d90d640cf41fe01c2177bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 12:50:22.546110 1495233 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime docker
	I0407 12:50:22.546278 1495233 download.go:108] Downloading: https://dl.k8s.io/release/v1.32.2/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/20598-1489638/.minikube/cache/linux/arm64/v1.32.2/kubectl
	
	
	* The control-plane node download-only-013320 host does not exist
	  To start a cluster, run: "minikube start -p download-only-013320"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.32.2/LogsDuration (0.10s)
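Unlike the preload URL, the kubectl download above uses checksum=file:..., i.e. the expected digest is published as a separate .sha256 sidecar next to the binary. A hedged sketch of that flow, fetching the sidecar and hashing the local file (paths and error handling are simplified; this is not minikube's downloader):

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

func main() {
	if len(os.Args) != 2 {
		fmt.Fprintln(os.Stderr, "usage: verify <path-to-kubectl>")
		os.Exit(2)
	}
	// Fetch the published digest; the .sha256 file carries the hex
	// digest as its first whitespace-separated field.
	resp, err := http.Get("https://dl.k8s.io/release/v1.32.2/bin/linux/arm64/kubectl.sha256")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	raw, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	want := strings.Fields(string(raw))[0]

	// Hash the local binary and compare.
	f, err := os.Open(os.Args[1])
	if err != nil {
		panic(err)
	}
	defer f.Close()
	h := sha256.New()
	if _, err := io.Copy(h, f); err != nil {
		panic(err)
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != want {
		fmt.Printf("mismatch: got %s want %s\n", got, want)
		os.Exit(1)
	}
	fmt.Println("checksum OK")
}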

TestDownloadOnly/v1.32.2/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.32.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.32.2/DeleteAll (0.22s)

TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-013320
--- PASS: TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds (0.14s)

TestBinaryMirror (0.6s)

=== RUN   TestBinaryMirror
I0407 12:50:24.446706 1495026 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.2/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-332376 --alsologtostderr --binary-mirror http://127.0.0.1:33423 --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "binary-mirror-332376" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-332376
--- PASS: TestBinaryMirror (0.60s)
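TestBinaryMirror passes --binary-mirror http://127.0.0.1:33423 so that kubectl is fetched from a local HTTP endpoint instead of dl.k8s.io. Any static file server with a dl.k8s.io-style directory layout can play that role; a toy stand-in (the port matches the log, the ./mirror directory is an assumption):

package main

import (
	"log"
	"net/http"
)

func main() {
	// Serve ./mirror, laid out like the upstream host, e.g.
	// ./mirror/release/v1.32.2/bin/linux/arm64/kubectl
	log.Fatal(http.ListenAndServe("127.0.0.1:33423", http.FileServer(http.Dir("./mirror"))))
}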

TestOffline (88.35s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-arm64 start -p offline-docker-418728 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-arm64 start -p offline-docker-418728 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker: (1m24.718327928s)
helpers_test.go:175: Cleaning up "offline-docker-418728" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p offline-docker-418728
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p offline-docker-418728: (3.629994977s)
--- PASS: TestOffline (88.35s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.09s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-378486
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-378486: exit status 85 (87.260904ms)

-- stdout --
	* Profile "addons-378486" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-378486"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.09s)
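Exit status 85 here is the expected outcome: enabling an addon on a profile that does not exist must fail, and the test asserts the specific code rather than just any error. A sketch of that assertion with os/exec (a standalone stand-in, not the test's actual helper):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "addons", "enable", "dashboard", "-p", "addons-378486")
	err := cmd.Run()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		// The test expects 85 when the profile is missing.
		fmt.Println("exit status:", ee.ExitCode())
		return
	}
	fmt.Println("expected a non-zero exit, got:", err)
}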

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-378486
addons_test.go:950: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-378486: exit status 85 (73.308165ms)

-- stdout --
	* Profile "addons-378486" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-378486"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

TestAddons/Setup (223.08s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-arm64 start -p addons-378486 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-arm64 start -p addons-378486 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m43.077519009s)
--- PASS: TestAddons/Setup (223.08s)

TestAddons/serial/Volcano (42.91s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:823: volcano-controller stabilized in 68.788115ms
addons_test.go:815: volcano-admission stabilized in 70.322948ms
addons_test.go:807: volcano-scheduler stabilized in 70.910627ms
addons_test.go:829: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-75fdd99bcf-qh764" [fae00b2f-8339-43ef-a666-b7a8662c1f4e] Running
addons_test.go:829: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.003448522s
addons_test.go:833: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-75d8f6b5c-jq6s2" [5824511a-2581-43f9-a3af-514bb87141e7] Running
addons_test.go:833: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.00401056s
addons_test.go:837: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-86bdc5c9c-qswrn" [71306aec-1d27-4214-bc29-f91d91938f6e] Running
addons_test.go:837: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 6.004505621s
addons_test.go:842: (dbg) Run:  kubectl --context addons-378486 delete -n volcano-system job volcano-admission-init
addons_test.go:848: (dbg) Run:  kubectl --context addons-378486 create -f testdata/vcjob.yaml
addons_test.go:856: (dbg) Run:  kubectl --context addons-378486 get vcjob -n my-volcano
addons_test.go:874: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [0b4fff76-9e71-4ab7-86de-41b78cdb13ab] Pending
helpers_test.go:344: "test-job-nginx-0" [0b4fff76-9e71-4ab7-86de-41b78cdb13ab] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [0b4fff76-9e71-4ab7-86de-41b78cdb13ab] Running
addons_test.go:874: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 14.003689357s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-378486 addons disable volcano --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-378486 addons disable volcano --alsologtostderr -v=1: (11.253220509s)
--- PASS: TestAddons/serial/Volcano (42.91s)
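Each "waiting 6m0s for pods matching ..." line above is a poll-until-healthy loop: the helper repeatedly lists pods by label selector until one reports Running or the deadline passes. A rough equivalent that shells out to kubectl (the one-second interval and the function shape are assumptions; the real helper uses the Kubernetes client libraries):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitRunning polls pod phases for a label selector until one is
// Running or the timeout elapses.
func waitRunning(kubeContext, namespace, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, _ := exec.Command("kubectl", "--context", kubeContext, "-n", namespace,
			"get", "pods", "-l", selector, "-o", "jsonpath={.items[*].status.phase}").Output()
		if strings.Contains(string(out), "Running") {
			return nil
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("timed out waiting for %s in namespace %s", selector, namespace)
}

func main() {
	fmt.Println(waitRunning("addons-378486", "volcano-system", "app=volcano-scheduler", 6*time.Minute))
}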

TestAddons/serial/GCPAuth/Namespaces (0.18s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-378486 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-378486 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.18s)

TestAddons/serial/GCPAuth/FakeCredentials (9.92s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-378486 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-378486 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [499bd216-4d62-4b66-a03a-03aac4f9d86a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [499bd216-4d62-4b66-a03a-03aac4f9d86a] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.005167461s
addons_test.go:633: (dbg) Run:  kubectl --context addons-378486 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-378486 describe sa gcp-auth-test
addons_test.go:659: (dbg) Run:  kubectl --context addons-378486 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:683: (dbg) Run:  kubectl --context addons-378486 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.92s)

TestAddons/parallel/Registry (17.86s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 6.880465ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6c88467877-m4hn8" [fd837021-a3be-4392-a865-3d4912aa00b9] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.005572016s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-9spwj" [c43ac4b7-3b80-4c9c-8589-c01cc380c126] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.005650964s
addons_test.go:331: (dbg) Run:  kubectl --context addons-378486 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-378486 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-378486 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.409751835s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-arm64 -p addons-378486 ip
2025/04/07 12:55:26 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-378486 addons disable registry --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-378486 addons disable registry --alsologtostderr -v=1: (1.16313516s)
--- PASS: TestAddons/parallel/Registry (17.86s)

TestAddons/parallel/Ingress (19.34s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-378486 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-378486 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-378486 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [a274bf82-66f6-40b6-ac65-33921c226157] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [a274bf82-66f6-40b6-ac65-33921c226157] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.003440048s
I0407 12:56:48.631745 1495026 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-arm64 -p addons-378486 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context addons-378486 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-arm64 -p addons-378486 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-378486 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-378486 addons disable ingress-dns --alsologtostderr -v=1: (1.075109533s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-378486 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-378486 addons disable ingress --alsologtostderr -v=1: (7.73592819s)
--- PASS: TestAddons/parallel/Ingress (19.34s)
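The key assertion in this test is the curl call with -H 'Host: nginx.example.com': the request goes to the node address, and the ingress controller routes it by the Host header alone. The same probe in Go (a self-contained sketch; the address and host name are the ones in the log):

package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	req, err := http.NewRequest("GET", "http://127.0.0.1/", nil)
	if err != nil {
		panic(err)
	}
	// Overriding req.Host sends the Host header the Ingress rule matches.
	req.Host = "nginx.example.com"
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, len(body), "bytes")
}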

TestAddons/parallel/InspektorGadget (11.84s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-25s2n" [4d17b9cc-5fd1-4282-8f38-3e3c0ceb0408] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.00372311s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-378486 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-378486 addons disable inspektor-gadget --alsologtostderr -v=1: (5.840284624s)
--- PASS: TestAddons/parallel/InspektorGadget (11.84s)

TestAddons/parallel/MetricsServer (6.74s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 3.435584ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7fbb699795-svn79" [134903ea-a176-4ff1-8570-b1cddbdd431c] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.00369866s
addons_test.go:402: (dbg) Run:  kubectl --context addons-378486 top pods -n kube-system
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-378486 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.74s)

TestAddons/parallel/CSI (49.02s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I0407 12:55:52.381203 1495026 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0407 12:55:52.385057 1495026 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0407 12:55:52.385084 1495026 kapi.go:107] duration metric: took 6.766502ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 6.776751ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-378486 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-378486 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-378486 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-378486 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-378486 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-378486 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-378486 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-378486 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-378486 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-378486 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-378486 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [e522c466-b4d2-48de-8dc8-52a58a28ef98] Pending
helpers_test.go:344: "task-pv-pod" [e522c466-b4d2-48de-8dc8-52a58a28ef98] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [e522c466-b4d2-48de-8dc8-52a58a28ef98] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.003290953s
addons_test.go:511: (dbg) Run:  kubectl --context addons-378486 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-378486 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-378486 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-378486 delete pod task-pv-pod
addons_test.go:527: (dbg) Run:  kubectl --context addons-378486 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-378486 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-378486 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-378486 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-378486 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-378486 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-378486 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-378486 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-378486 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-378486 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-378486 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-378486 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-378486 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-378486 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-378486 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-378486 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-378486 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-378486 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-378486 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [3394f78d-58ba-485d-b360-bba37faa6354] Pending
helpers_test.go:344: "task-pv-pod-restore" [3394f78d-58ba-485d-b360-bba37faa6354] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [3394f78d-58ba-485d-b360-bba37faa6354] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003593254s
addons_test.go:553: (dbg) Run:  kubectl --context addons-378486 delete pod task-pv-pod-restore
addons_test.go:557: (dbg) Run:  kubectl --context addons-378486 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-378486 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-378486 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-378486 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-378486 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.833228906s)
--- PASS: TestAddons/parallel/CSI (49.02s)
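
To replay this CSI flow by hand, the same kubectl steps can be run against the profile; a minimal sketch, assuming the addons-378486 profile is still up and the testdata/csi-hostpath-driver manifests from the minikube source tree are available (their contents are not shown in this log):

    # create a PVC backed by the csi-hostpath driver and wait for it to bind
    kubectl --context addons-378486 create -f testdata/csi-hostpath-driver/pvc.yaml
    kubectl --context addons-378486 get pvc hpvc -o jsonpath='{.status.phase}'   # poll until "Bound"
    # attach a pod, snapshot the volume, then restore the snapshot into a new PVC
    kubectl --context addons-378486 create -f testdata/csi-hostpath-driver/pv-pod.yaml
    kubectl --context addons-378486 create -f testdata/csi-hostpath-driver/snapshot.yaml
    kubectl --context addons-378486 get volumesnapshot new-snapshot-demo -o jsonpath='{.status.readyToUse}'   # poll until "true"
    kubectl --context addons-378486 delete pod task-pv-pod
    kubectl --context addons-378486 delete pvc hpvc
    kubectl --context addons-378486 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
    kubectl --context addons-378486 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
    # tear down the addons when done, as the test does
    minikube -p addons-378486 addons disable volumesnapshots
    minikube -p addons-378486 addons disable csi-hostpath-driver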

TestAddons/parallel/Headlamp (17.7s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-378486 --alsologtostderr -v=1
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5d4b5d7bd6-548lj" [93d3aedf-4f09-46dc-8f15-95dc0580f476] Pending
helpers_test.go:344: "headlamp-5d4b5d7bd6-548lj" [93d3aedf-4f09-46dc-8f15-95dc0580f476] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5d4b5d7bd6-548lj" [93d3aedf-4f09-46dc-8f15-95dc0580f476] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.003838858s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-378486 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-378486 addons disable headlamp --alsologtostderr -v=1: (5.698918559s)
--- PASS: TestAddons/parallel/Headlamp (17.70s)
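
The equivalent manual check is short; a sketch against the same profile (the namespace and pod label come straight from the log above):

    minikube addons enable headlamp -p addons-378486
    kubectl --context addons-378486 -n headlamp get pods -l app.kubernetes.io/name=headlamp   # wait for Running
    minikube -p addons-378486 addons disable headlamp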

TestAddons/parallel/CloudSpanner (6.57s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-cc9755fc7-zpzn7" [74f7184b-8f18-45c4-a6e2-aa0115c46973] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003543981s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-378486 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.57s)

TestAddons/parallel/LocalPath (52.93s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-378486 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-378486 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-378486 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-378486 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-378486 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-378486 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-378486 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [765c7909-936e-4b5f-a725-5a1dd4893b95] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [765c7909-936e-4b5f-a725-5a1dd4893b95] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [765c7909-936e-4b5f-a725-5a1dd4893b95] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.004235931s
addons_test.go:906: (dbg) Run:  kubectl --context addons-378486 get pvc test-pvc -o=json
addons_test.go:915: (dbg) Run:  out/minikube-linux-arm64 -p addons-378486 ssh "cat /opt/local-path-provisioner/pvc-1693fc77-4006-421b-a0f5-9bc29a5c2969_default_test-pvc/file1"
addons_test.go:927: (dbg) Run:  kubectl --context addons-378486 delete pod test-local-path
addons_test.go:931: (dbg) Run:  kubectl --context addons-378486 delete pvc test-pvc
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-378486 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-378486 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.542706763s)
--- PASS: TestAddons/parallel/LocalPath (52.93s)
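
The ssh step above is the interesting one: it proves the local-path provisioner actually wrote the pod's file onto the node filesystem. A hand-run sketch, assuming the same profile; the pvc-... directory name depends on which PV the provisioner creates, so read it from the PVC first:

    kubectl --context addons-378486 apply -f testdata/storage-provisioner-rancher/pvc.yaml
    kubectl --context addons-378486 apply -f testdata/storage-provisioner-rancher/pod.yaml
    # the on-node path embeds the PV name: /opt/local-path-provisioner/<pv>_<namespace>_<pvc>
    PV=$(kubectl --context addons-378486 get pvc test-pvc -o jsonpath='{.spec.volumeName}')
    minikube -p addons-378486 ssh "cat /opt/local-path-provisioner/${PV}_default_test-pvc/file1"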

TestAddons/parallel/NvidiaDevicePlugin (5.56s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-b527v" [8c941b28-5585-42e8-b059-a904ee90de1f] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.003519125s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-378486 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.56s)

TestAddons/parallel/Yakd (11.72s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-575dd5996b-9q26c" [14d585fe-d049-434f-8270-303e694851b8] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.002580996s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-378486 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-378486 addons disable yakd --alsologtostderr -v=1: (5.716971452s)
--- PASS: TestAddons/parallel/Yakd (11.72s)

TestAddons/StoppedEnableDisable (11.24s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-378486
addons_test.go:170: (dbg) Done: out/minikube-linux-arm64 stop -p addons-378486: (10.954377652s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-378486
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-378486
addons_test.go:183: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-378486
--- PASS: TestAddons/StoppedEnableDisable (11.24s)
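
This test verifies that addon state can still be changed while the cluster is stopped; the same sequence by hand:

    minikube stop -p addons-378486
    minikube addons enable dashboard -p addons-378486    # addon toggling works against a stopped cluster
    minikube addons disable dashboard -p addons-378486
    minikube addons disable gvisor -p addons-378486      # exercises disabling an addon that was never enabled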

TestCertOptions (35.17s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-925217 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker
E0407 13:42:24.308925 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/skaffold-943047/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-925217 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker: (32.404002082s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-925217 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-925217 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-925217 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-925217" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-925217
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-925217: (2.105388309s)
--- PASS: TestCertOptions (35.17s)
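
To confirm by hand that the extra SANs and the custom port made it into the apiserver certificate, the same two probes from the log suffice; the grep is an addition for readability:

    minikube -p cert-options-925217 ssh \
      "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
      | grep -A1 "Subject Alternative Name"              # expect 192.168.15.15 and www.google.com listed
    kubectl --context cert-options-925217 config view    # the server URL should end in :8555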

TestCertExpiration (246.96s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-687877 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-687877 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker: (40.075934441s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-687877 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-687877 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker: (24.668620462s)
helpers_test.go:175: Cleaning up "cert-expiration-687877" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-687877
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-687877: (2.217043163s)
--- PASS: TestCertExpiration (246.96s)
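
The ~247s total is mostly the test waiting out the 3-minute expiry window between the two starts; a sketch of the same sequence, with a sleep standing in for that wait (the exact wait mechanism is not shown in this log):

    minikube start -p cert-expiration-687877 --memory=2048 --cert-expiration=3m --driver=docker --container-runtime=docker
    sleep 180   # let the short-lived certs expire
    # restarting with a long expiry must regenerate the certs rather than fail
    minikube start -p cert-expiration-687877 --memory=2048 --cert-expiration=8760h --driver=docker --container-runtime=docker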

TestDockerFlags (34.19s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-arm64 start -p docker-flags-055908 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
E0407 13:41:56.602156 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/skaffold-943047/client.crt: no such file or directory" logger="UnhandledError"
docker_test.go:51: (dbg) Done: out/minikube-linux-arm64 start -p docker-flags-055908 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (31.436267571s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-arm64 -p docker-flags-055908 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-arm64 -p docker-flags-055908 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-055908" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-flags-055908
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-flags-055908: (2.112776193s)
--- PASS: TestDockerFlags (34.19s)
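
The two systemctl probes are how the test ties the --docker-env and --docker-opt flags back to the running daemon; the same check by hand against the started profile:

    minikube -p docker-flags-055908 ssh "sudo systemctl show docker --property=Environment --no-pager"   # expect FOO=BAR and BAZ=BAT
    minikube -p docker-flags-055908 ssh "sudo systemctl show docker --property=ExecStart --no-pager"     # expect --debug and --icc=true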

TestForceSystemdFlag (41.23s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-124552 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
E0407 13:40:17.615299 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/functional-340022/client.crt: no such file or directory" logger="UnhandledError"
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-124552 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (37.878960488s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-124552 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-124552" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-124552
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-124552: (2.797744712s)
--- PASS: TestForceSystemdFlag (41.23s)
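
The single probe that decides this test (and its Env sibling below) is the cgroup driver Docker reports inside the node:

    minikube -p force-systemd-flag-124552 ssh "docker info --format {{.CgroupDriver}}"   # expect: systemd
    # the Env variant presumably reaches the same state via MINIKUBE_FORCE_SYSTEMD=true
    # rather than --force-systemd; the variable is not shown being set in this log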

TestForceSystemdEnv (43.49s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-383276 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-383276 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (40.347952373s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-383276 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-383276" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-383276
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-383276: (2.635005257s)
--- PASS: TestForceSystemdEnv (43.49s)

TestErrorSpam/setup (37.91s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-200129 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-200129 --driver=docker  --container-runtime=docker
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-200129 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-200129 --driver=docker  --container-runtime=docker: (37.910816716s)
--- PASS: TestErrorSpam/setup (37.91s)

TestErrorSpam/start (0.79s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-200129 --log_dir /tmp/nospam-200129 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-200129 --log_dir /tmp/nospam-200129 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-200129 --log_dir /tmp/nospam-200129 start --dry-run
--- PASS: TestErrorSpam/start (0.79s)

TestErrorSpam/status (1.09s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-200129 --log_dir /tmp/nospam-200129 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-200129 --log_dir /tmp/nospam-200129 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-200129 --log_dir /tmp/nospam-200129 status
--- PASS: TestErrorSpam/status (1.09s)

TestErrorSpam/pause (1.42s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-200129 --log_dir /tmp/nospam-200129 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-200129 --log_dir /tmp/nospam-200129 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-200129 --log_dir /tmp/nospam-200129 pause
--- PASS: TestErrorSpam/pause (1.42s)

TestErrorSpam/unpause (1.5s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-200129 --log_dir /tmp/nospam-200129 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-200129 --log_dir /tmp/nospam-200129 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-200129 --log_dir /tmp/nospam-200129 unpause
--- PASS: TestErrorSpam/unpause (1.50s)

TestErrorSpam/stop (1.41s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-200129 --log_dir /tmp/nospam-200129 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-200129 --log_dir /tmp/nospam-200129 stop: (1.197103363s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-200129 --log_dir /tmp/nospam-200129 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-200129 --log_dir /tmp/nospam-200129 stop
--- PASS: TestErrorSpam/stop (1.41s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1872: local sync path: /home/jenkins/minikube-integration/20598-1489638/.minikube/files/etc/test/nested/copy/1495026/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (43.6s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2251: (dbg) Run:  out/minikube-linux-arm64 start -p functional-340022 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker
functional_test.go:2251: (dbg) Done: out/minikube-linux-arm64 start -p functional-340022 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker: (43.588508771s)
--- PASS: TestFunctional/serial/StartWithProxy (43.60s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (35.3s)

=== RUN   TestFunctional/serial/SoftStart
I0407 12:58:41.978566 1495026 config.go:182] Loaded profile config "functional-340022": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
functional_test.go:676: (dbg) Run:  out/minikube-linux-arm64 start -p functional-340022 --alsologtostderr -v=8
E0407 12:59:08.260603 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/addons-378486/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:59:08.267656 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/addons-378486/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:59:08.278987 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/addons-378486/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:59:08.300413 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/addons-378486/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:59:08.341865 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/addons-378486/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:59:08.423371 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/addons-378486/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:59:08.584736 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/addons-378486/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:59:08.905987 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/addons-378486/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:59:09.547641 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/addons-378486/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:59:10.829726 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/addons-378486/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:59:13.391717 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/addons-378486/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:676: (dbg) Done: out/minikube-linux-arm64 start -p functional-340022 --alsologtostderr -v=8: (35.298773386s)
functional_test.go:680: soft start took 35.301555502s for "functional-340022" cluster.
I0407 12:59:17.277647 1495026 config.go:182] Loaded profile config "functional-340022": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
--- PASS: TestFunctional/serial/SoftStart (35.30s)

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:698: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.09s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:713: (dbg) Run:  kubectl --context functional-340022 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.39s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1066: (dbg) Run:  out/minikube-linux-arm64 -p functional-340022 cache add registry.k8s.io/pause:3.1
E0407 12:59:18.513667 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/addons-378486/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1066: (dbg) Done: out/minikube-linux-arm64 -p functional-340022 cache add registry.k8s.io/pause:3.1: (1.189871762s)
functional_test.go:1066: (dbg) Run:  out/minikube-linux-arm64 -p functional-340022 cache add registry.k8s.io/pause:3.3
functional_test.go:1066: (dbg) Done: out/minikube-linux-arm64 -p functional-340022 cache add registry.k8s.io/pause:3.3: (1.100914145s)
functional_test.go:1066: (dbg) Run:  out/minikube-linux-arm64 -p functional-340022 cache add registry.k8s.io/pause:latest
functional_test.go:1066: (dbg) Done: out/minikube-linux-arm64 -p functional-340022 cache add registry.k8s.io/pause:latest: (1.100175859s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.39s)

TestFunctional/serial/CacheCmd/cache/add_local (1.03s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1094: (dbg) Run:  docker build -t minikube-local-cache-test:functional-340022 /tmp/TestFunctionalserialCacheCmdcacheadd_local2831952810/001
functional_test.go:1106: (dbg) Run:  out/minikube-linux-arm64 -p functional-340022 cache add minikube-local-cache-test:functional-340022
functional_test.go:1111: (dbg) Run:  out/minikube-linux-arm64 -p functional-340022 cache delete minikube-local-cache-test:functional-340022
functional_test.go:1100: (dbg) Run:  docker rmi minikube-local-cache-test:functional-340022
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.03s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1119: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1127: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1141: (dbg) Run:  out/minikube-linux-arm64 -p functional-340022 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.61s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1164: (dbg) Run:  out/minikube-linux-arm64 -p functional-340022 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1170: (dbg) Run:  out/minikube-linux-arm64 -p functional-340022 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1170: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-340022 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (306.214701ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1175: (dbg) Run:  out/minikube-linux-arm64 -p functional-340022 cache reload
functional_test.go:1180: (dbg) Run:  out/minikube-linux-arm64 -p functional-340022 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.61s)
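
The reload sequence is the clearest demonstration of what `minikube cache` guarantees: an image deleted inside the node comes back from the host-side cache. By hand, assuming the pause image was cached earlier as in the add_remote test:

    minikube -p functional-340022 ssh sudo docker rmi registry.k8s.io/pause:latest        # remove the image inside the node
    minikube -p functional-340022 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # fails: no such image present
    minikube -p functional-340022 cache reload                                            # re-loads cached images into the node
    minikube -p functional-340022 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again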

TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1189: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1189: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.18s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:733: (dbg) Run:  out/minikube-linux-arm64 -p functional-340022 kubectl -- --context functional-340022 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.18s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:758: (dbg) Run:  out/kubectl --context functional-340022 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

TestFunctional/serial/ExtraConfig (45.03s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:774: (dbg) Run:  out/minikube-linux-arm64 start -p functional-340022 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0407 12:59:28.755017 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/addons-378486/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:59:49.236332 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/addons-378486/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:774: (dbg) Done: out/minikube-linux-arm64 start -p functional-340022 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (45.024978037s)
functional_test.go:778: restart took 45.025076836s for "functional-340022" cluster.
I0407 13:00:09.354392 1495026 config.go:182] Loaded profile config "functional-340022": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
--- PASS: TestFunctional/serial/ExtraConfig (45.03s)
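
The restart carries an --extra-config flag through to the apiserver; a sketch of the same restart plus a quick control-plane check (the kubectl query mirrors the ComponentHealth test that follows):

    minikube start -p functional-340022 \
      --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
    kubectl --context functional-340022 get po -l tier=control-plane -n kube-system   # etcd, apiserver, controller-manager, scheduler should be Running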

TestFunctional/serial/ComponentHealth (0.11s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:827: (dbg) Run:  kubectl --context functional-340022 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:842: etcd phase: Running
functional_test.go:852: etcd status: Ready
functional_test.go:842: kube-apiserver phase: Running
functional_test.go:852: kube-apiserver status: Ready
functional_test.go:842: kube-controller-manager phase: Running
functional_test.go:852: kube-controller-manager status: Ready
functional_test.go:842: kube-scheduler phase: Running
functional_test.go:852: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.11s)

TestFunctional/serial/LogsCmd (1.21s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1253: (dbg) Run:  out/minikube-linux-arm64 -p functional-340022 logs
functional_test.go:1253: (dbg) Done: out/minikube-linux-arm64 -p functional-340022 logs: (1.211099241s)
--- PASS: TestFunctional/serial/LogsCmd (1.21s)

TestFunctional/serial/LogsFileCmd (1.24s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1267: (dbg) Run:  out/minikube-linux-arm64 -p functional-340022 logs --file /tmp/TestFunctionalserialLogsFileCmd3669356978/001/logs.txt
functional_test.go:1267: (dbg) Done: out/minikube-linux-arm64 -p functional-340022 logs --file /tmp/TestFunctionalserialLogsFileCmd3669356978/001/logs.txt: (1.238140704s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.24s)

TestFunctional/serial/InvalidService (4.98s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2338: (dbg) Run:  kubectl --context functional-340022 apply -f testdata/invalidsvc.yaml
functional_test.go:2352: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-340022
functional_test.go:2352: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-340022: exit status 115 (446.313419ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31235 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2344: (dbg) Run:  kubectl --context functional-340022 delete -f testdata/invalidsvc.yaml
functional_test.go:2344: (dbg) Done: kubectl --context functional-340022 delete -f testdata/invalidsvc.yaml: (1.215599998s)
--- PASS: TestFunctional/serial/InvalidService (4.98s)
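
The exit code is the contract here: `minikube service` must fail with SVC_UNREACHABLE (exit status 115) when the service has no running pod behind it. By hand:

    kubectl --context functional-340022 apply -f testdata/invalidsvc.yaml
    minikube service invalid-svc -p functional-340022   # exits 115: no running pod for service invalid-svc
    kubectl --context functional-340022 delete -f testdata/invalidsvc.yaml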

TestFunctional/parallel/ConfigCmd (0.5s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1216: (dbg) Run:  out/minikube-linux-arm64 -p functional-340022 config unset cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-arm64 -p functional-340022 config get cpus
functional_test.go:1216: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-340022 config get cpus: exit status 14 (86.410862ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1216: (dbg) Run:  out/minikube-linux-arm64 -p functional-340022 config set cpus 2
functional_test.go:1216: (dbg) Run:  out/minikube-linux-arm64 -p functional-340022 config get cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-arm64 -p functional-340022 config unset cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-arm64 -p functional-340022 config get cpus
functional_test.go:1216: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-340022 config get cpus: exit status 14 (81.755688ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.50s)
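
`config get` on an unset key exits with status 14, which is what the test asserts twice; the full round-trip by hand:

    minikube -p functional-340022 config unset cpus
    minikube -p functional-340022 config get cpus    # exit status 14: key not found in config
    minikube -p functional-340022 config set cpus 2
    minikube -p functional-340022 config get cpus    # prints 2
    minikube -p functional-340022 config unset cpus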

TestFunctional/parallel/DashboardCmd (15s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:922: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-340022 --alsologtostderr -v=1]
functional_test.go:927: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-340022 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 1536912: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (15.00s)

TestFunctional/parallel/DryRun (0.46s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:991: (dbg) Run:  out/minikube-linux-arm64 start -p functional-340022 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:991: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-340022 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (196.246849ms)

-- stdout --
	* [functional-340022] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20598
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20598-1489638/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20598-1489638/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0407 13:00:49.700669 1536575 out.go:345] Setting OutFile to fd 1 ...
	I0407 13:00:49.700810 1536575 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 13:00:49.700816 1536575 out.go:358] Setting ErrFile to fd 2...
	I0407 13:00:49.700820 1536575 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 13:00:49.701296 1536575 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20598-1489638/.minikube/bin
	I0407 13:00:49.702168 1536575 out.go:352] Setting JSON to false
	I0407 13:00:49.703373 1536575 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":24198,"bootTime":1744006652,"procs":219,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1081-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0407 13:00:49.703454 1536575 start.go:139] virtualization:  
	I0407 13:00:49.706733 1536575 out.go:177] * [functional-340022] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	I0407 13:00:49.710605 1536575 out.go:177]   - MINIKUBE_LOCATION=20598
	I0407 13:00:49.710747 1536575 notify.go:220] Checking for updates...
	I0407 13:00:49.717505 1536575 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0407 13:00:49.720448 1536575 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20598-1489638/kubeconfig
	I0407 13:00:49.723451 1536575 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20598-1489638/.minikube
	I0407 13:00:49.726411 1536575 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0407 13:00:49.732394 1536575 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0407 13:00:49.736466 1536575 config.go:182] Loaded profile config "functional-340022": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0407 13:00:49.737152 1536575 driver.go:394] Setting default libvirt URI to qemu:///system
	I0407 13:00:49.758556 1536575 docker.go:123] docker version: linux-28.0.4:Docker Engine - Community
	I0407 13:00:49.758653 1536575 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0407 13:00:49.817745 1536575 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-04-07 13:00:49.808104597 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1081-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> Se
rverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0]] Warnings:<nil>}}
	I0407 13:00:49.817869 1536575 docker.go:318] overlay module found
	I0407 13:00:49.821079 1536575 out.go:177] * Using the docker driver based on existing profile
	I0407 13:00:49.824054 1536575 start.go:297] selected driver: docker
	I0407 13:00:49.824076 1536575 start.go:901] validating driver "docker" against &{Name:functional-340022 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:functional-340022 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 13:00:49.824243 1536575 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0407 13:00:49.828228 1536575 out.go:201] 
	W0407 13:00:49.831310 1536575 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0407 13:00:49.834262 1536575 out.go:201] 

** /stderr **
functional_test.go:1008: (dbg) Run:  out/minikube-linux-arm64 start -p functional-340022 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
--- PASS: TestFunctional/parallel/DryRun (0.46s)
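
For context on the exit above: the dry-run fails fast because minikube enforces a memory floor before doing any real work. A minimal sketch of that kind of pre-flight check, assuming an illustrative 1800MB constant and helper name (this is not minikube's actual implementation):

	package main

	import (
		"fmt"
		"os"
	)

	// minUsableMemoryMB mirrors the 1800MB floor quoted in the
	// RSRC_INSUFFICIENT_REQ_MEMORY message above; the name is illustrative.
	const minUsableMemoryMB = 1800

	// validateMemory rejects requests below the usable minimum, the way
	// the dry-run above rejects --memory 250MB.
	func validateMemory(requestedMB int) error {
		if requestedMB < minUsableMemoryMB {
			return fmt.Errorf("requested memory allocation %dMiB is less than the usable minimum of %dMB", requestedMB, minUsableMemoryMB)
		}
		return nil
	}

	func main() {
		if err := validateMemory(250); err != nil {
			fmt.Fprintln(os.Stderr, "X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY:", err)
			os.Exit(1) // illustrative non-zero exit; the real code maps this to a reason code
		}
	}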

TestFunctional/parallel/InternationalLanguage (0.19s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1037: (dbg) Run:  out/minikube-linux-arm64 start -p functional-340022 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-340022 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (187.27412ms)
-- stdout --
	* [functional-340022] minikube v1.35.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20598
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20598-1489638/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20598-1489638/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
-- /stdout --
** stderr ** 
	I0407 13:00:49.507958 1536528 out.go:345] Setting OutFile to fd 1 ...
	I0407 13:00:49.508157 1536528 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 13:00:49.508189 1536528 out.go:358] Setting ErrFile to fd 2...
	I0407 13:00:49.508210 1536528 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 13:00:49.508673 1536528 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20598-1489638/.minikube/bin
	I0407 13:00:49.509112 1536528 out.go:352] Setting JSON to false
	I0407 13:00:49.510178 1536528 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":24198,"bootTime":1744006652,"procs":219,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1081-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0407 13:00:49.510283 1536528 start.go:139] virtualization:  
	I0407 13:00:49.514366 1536528 out.go:177] * [functional-340022] minikube v1.35.0 sur Ubuntu 20.04 (arm64)
	I0407 13:00:49.518201 1536528 out.go:177]   - MINIKUBE_LOCATION=20598
	I0407 13:00:49.518224 1536528 notify.go:220] Checking for updates...
	I0407 13:00:49.521242 1536528 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0407 13:00:49.524445 1536528 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20598-1489638/kubeconfig
	I0407 13:00:49.527623 1536528 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20598-1489638/.minikube
	I0407 13:00:49.530592 1536528 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0407 13:00:49.533450 1536528 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0407 13:00:49.536768 1536528 config.go:182] Loaded profile config "functional-340022": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0407 13:00:49.537392 1536528 driver.go:394] Setting default libvirt URI to qemu:///system
	I0407 13:00:49.562762 1536528 docker.go:123] docker version: linux-28.0.4:Docker Engine - Community
	I0407 13:00:49.563590 1536528 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0407 13:00:49.621493 1536528 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-04-07 13:00:49.611746117 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1081-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0]] Warnings:<nil>}}
	I0407 13:00:49.621608 1536528 docker.go:318] overlay module found
	I0407 13:00:49.624736 1536528 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0407 13:00:49.627609 1536528 start.go:297] selected driver: docker
	I0407 13:00:49.627632 1536528 start.go:901] validating driver "docker" against &{Name:functional-340022 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:functional-340022 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 13:00:49.627736 1536528 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0407 13:00:49.631358 1536528 out.go:201] 
	W0407 13:00:49.634232 1536528 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0407 13:00:49.637124 1536528 out.go:201] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.19s)
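
The French output in this test's stdout ("Utilisation du pilote docker basé sur le profil existant" is "Using the docker driver based on existing profile"; the closing error translates to "Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB") is selected from minikube's message translations by the process locale. A hedged sketch of forcing a locale on a child process via the standard LC_ALL variable; the binary path and flags are copied from the log, while fr_FR.UTF-8 is an assumed locale value, not necessarily what the test harness sets:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		// Re-run the same dry-run, but with a French locale so minikube's
		// localized messages are exercised. LC_ALL overrides all LC_* settings.
		cmd := exec.Command("out/minikube-linux-arm64", "start", "-p", "functional-340022",
			"--dry-run", "--memory", "250MB", "--alsologtostderr",
			"--driver=docker", "--container-runtime=docker")
		cmd.Env = append(os.Environ(), "LC_ALL=fr_FR.UTF-8")
		out, err := cmd.CombinedOutput()
		fmt.Printf("%s", out) // expect French output and a non-zero exit (23 above)
		if err != nil {
			fmt.Fprintln(os.Stderr, "exit:", err)
		}
	}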

TestFunctional/parallel/StatusCmd (1.09s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:871: (dbg) Run:  out/minikube-linux-arm64 -p functional-340022 status
functional_test.go:877: (dbg) Run:  out/minikube-linux-arm64 -p functional-340022 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:889: (dbg) Run:  out/minikube-linux-arm64 -p functional-340022 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.09s)
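
The second status invocation above hands minikube a Go template via -f, and each {{.Field}} is expanded against the status structure (the literal label "kublet" in the format string is a typo carried in the test itself; only the {{.Kubelet}} expansion matters). A minimal sketch of the same text/template mechanics against a stand-in struct; the Status type below is illustrative, not minikube's exported type:

	package main

	import (
		"os"
		"text/template"
	)

	// Status is a stand-in carrying the fields the -f template references.
	type Status struct {
		Host, Kubelet, APIServer, Kubeconfig string
	}

	func main() {
		// The same template string the test passes to `minikube status -f`.
		const format = "host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}"
		tmpl := template.Must(template.New("status").Parse(format))
		s := Status{Host: "Running", Kubelet: "Running", APIServer: "Running", Kubeconfig: "Configured"}
		if err := tmpl.Execute(os.Stdout, s); err != nil {
			panic(err)
		}
	}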

TestFunctional/parallel/ServiceCmdConnect (12.67s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1644: (dbg) Run:  kubectl --context functional-340022 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1652: (dbg) Run:  kubectl --context functional-340022 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1657: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-8449669db6-ngq85" [30186721-d4e9-423c-a9ef-f70828f105f5] Pending
helpers_test.go:344: "hello-node-connect-8449669db6-ngq85" [30186721-d4e9-423c-a9ef-f70828f105f5] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-8449669db6-ngq85" [30186721-d4e9-423c-a9ef-f70828f105f5] Running
functional_test.go:1657: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 12.003061511s
functional_test.go:1666: (dbg) Run:  out/minikube-linux-arm64 -p functional-340022 service hello-node-connect --url
functional_test.go:1672: found endpoint for hello-node-connect: http://192.168.49.2:31848
functional_test.go:1692: http://192.168.49.2:31848: success! body:
Hostname: hello-node-connect-8449669db6-ngq85

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31848
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-
--- PASS: TestFunctional/parallel/ServiceCmdConnect (12.67s)
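
The test resolves the NodePort URL with `service hello-node-connect --url` and then fetches it; the echoserver body above is the reflected request. A minimal sketch of that probe step, assuming the endpoint the log reported (a real run would substitute its own URL):

	package main

	import (
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		// Endpoint found by `minikube service hello-node-connect --url` above.
		url := "http://192.168.49.2:31848"
		client := &http.Client{Timeout: 10 * time.Second}
		resp, err := client.Get(url)
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, err := io.ReadAll(resp.Body)
		if err != nil {
			panic(err)
		}
		// The echoserver reflects the request; its Hostname line names the serving pod.
		fmt.Printf("status=%d\n%s", resp.StatusCode, body)
	}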

TestFunctional/parallel/AddonsCmd (0.18s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-340022 addons list
functional_test.go:1719: (dbg) Run:  out/minikube-linux-arm64 -p functional-340022 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.18s)

TestFunctional/parallel/PersistentVolumeClaim (26.9s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [1c3b1f0e-a239-464f-aa2e-a176010c833b] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003819259s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-340022 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-340022 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-340022 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-340022 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [fd85c69d-7169-4496-9d38-20a456b2c657] Pending
helpers_test.go:344: "sp-pod" [fd85c69d-7169-4496-9d38-20a456b2c657] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [fd85c69d-7169-4496-9d38-20a456b2c657] Running
E0407 13:00:30.198178 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/addons-378486/client.crt: no such file or directory" logger="UnhandledError"
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.006066809s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-340022 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-340022 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-340022 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [656873fb-ca0c-49e5-b9b5-f6c96a6fa534] Pending
helpers_test.go:344: "sp-pod" [656873fb-ca0c-49e5-b9b5-f6c96a6fa534] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [656873fb-ca0c-49e5-b9b5-f6c96a6fa534] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003374898s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-340022 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (26.90s)
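
The sequence above is the classic persistence check: write a file through the first pod, delete and recreate the pod against the same claim, then confirm the file survived. A hedged sketch of the same flow driven through kubectl; the context name and manifest paths are copied from the log, and the real test also waits for the new pod to reach Running before the final step:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// run shells out to kubectl against the functional-340022 context,
	// mirroring the test's (dbg) Run steps above.
	func run(args ...string) {
		args = append([]string{"--context", "functional-340022"}, args...)
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		fmt.Printf("$ kubectl %v\n%s", args, out)
		if err != nil {
			panic(err)
		}
	}

	func main() {
		run("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")       // write through the first pod
		run("delete", "-f", "testdata/storage-provisioner/pod.yaml") // drop the pod, keep the PVC
		run("apply", "-f", "testdata/storage-provisioner/pod.yaml")  // recreate against the same claim
		run("exec", "sp-pod", "--", "ls", "/tmp/mount")              // foo should still be listed
	}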

TestFunctional/parallel/SSHCmd (0.72s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1742: (dbg) Run:  out/minikube-linux-arm64 -p functional-340022 ssh "echo hello"
functional_test.go:1759: (dbg) Run:  out/minikube-linux-arm64 -p functional-340022 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.72s)

TestFunctional/parallel/CpCmd (2.03s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-340022 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-340022 ssh -n functional-340022 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-340022 cp functional-340022:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3203263530/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-340022 ssh -n functional-340022 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-340022 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-340022 ssh -n functional-340022 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.03s)

TestFunctional/parallel/FileSync (0.35s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1946: Checking for existence of /etc/test/nested/copy/1495026/hosts within VM
functional_test.go:1948: (dbg) Run:  out/minikube-linux-arm64 -p functional-340022 ssh "sudo cat /etc/test/nested/copy/1495026/hosts"
functional_test.go:1953: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.35s)

TestFunctional/parallel/CertSync (2.18s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1989: Checking for existence of /etc/ssl/certs/1495026.pem within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-arm64 -p functional-340022 ssh "sudo cat /etc/ssl/certs/1495026.pem"
functional_test.go:1989: Checking for existence of /usr/share/ca-certificates/1495026.pem within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-arm64 -p functional-340022 ssh "sudo cat /usr/share/ca-certificates/1495026.pem"
functional_test.go:1989: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-arm64 -p functional-340022 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2016: Checking for existence of /etc/ssl/certs/14950262.pem within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-arm64 -p functional-340022 ssh "sudo cat /etc/ssl/certs/14950262.pem"
functional_test.go:2016: Checking for existence of /usr/share/ca-certificates/14950262.pem within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-arm64 -p functional-340022 ssh "sudo cat /usr/share/ca-certificates/14950262.pem"
functional_test.go:2016: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-arm64 -p functional-340022 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.18s)

TestFunctional/parallel/NodeLabels (0.1s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:236: (dbg) Run:  kubectl --context functional-340022 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
2025/04/07 13:01:04 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/NodeLabels (0.10s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.32s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2044: (dbg) Run:  out/minikube-linux-arm64 -p functional-340022 ssh "sudo systemctl is-active crio"
functional_test.go:2044: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-340022 ssh "sudo systemctl is-active crio": exit status 1 (323.211652ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.32s)
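
The PASS despite the non-zero exit here is intentional: `systemctl is-active` exits 0 only when the unit is active and prints the state either way, so the test treats exit status 3 plus "inactive" on stdout as proof that cri-o is disabled on a docker-runtime cluster. A minimal sketch of reading both the printed state and the exit code (run locally rather than over `minikube ssh`; the *exec.ExitError handling is standard library behavior):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// `systemctl is-active` exits non-zero for any state other than "active".
		out, err := exec.Command("systemctl", "is-active", "crio").Output()
		state := strings.TrimSpace(string(out))
		code := 0
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			code = exitErr.ExitCode() // 3 means the unit is inactive or not running
		}
		fmt.Printf("state=%q exit=%d\n", state, code)
	}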

TestFunctional/parallel/License (0.24s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2305: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.24s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.57s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-340022 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-340022 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-340022 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 1533822: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-340022 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.57s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-340022 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.34s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-340022 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [2cecb1e9-8067-491a-98d3-d08a9967a61a] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [2cecb1e9-8067-491a-98d3-d08a9967a61a] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.00381444s
I0407 13:00:26.300472 1495026 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.34s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.09s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-340022 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.09s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.103.20.82 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-340022 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ServiceCmd/DeployApp (6.23s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1454: (dbg) Run:  kubectl --context functional-340022 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1462: (dbg) Run:  kubectl --context functional-340022 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1467: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64fc58db8c-2qtdt" [8761d15b-723c-4ee5-9934-201ca14539bb] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64fc58db8c-2qtdt" [8761d15b-723c-4ee5-9934-201ca14539bb] Running
functional_test.go:1467: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.004511082s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.23s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1287: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1292: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

TestFunctional/parallel/ProfileCmd/profile_list (0.51s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1327: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1332: Took "448.520895ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1341: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1346: Took "60.872098ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.51s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.51s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1378: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1383: Took "437.225293ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1391: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1396: Took "76.138064ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.51s)

TestFunctional/parallel/ServiceCmd/List (0.63s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1476: (dbg) Run:  out/minikube-linux-arm64 -p functional-340022 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.63s)

TestFunctional/parallel/MountCmd/any-port (8.4s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-340022 /tmp/TestFunctionalparallelMountCmdany-port1210175572/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1744030845970935148" to /tmp/TestFunctionalparallelMountCmdany-port1210175572/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1744030845970935148" to /tmp/TestFunctionalparallelMountCmdany-port1210175572/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1744030845970935148" to /tmp/TestFunctionalparallelMountCmdany-port1210175572/001/test-1744030845970935148
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-340022 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-340022 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (566.182417ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I0407 13:00:46.538202 1495026 retry.go:31] will retry after 351.102231ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-340022 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-340022 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Apr  7 13:00 created-by-test
-rw-r--r-- 1 docker docker 24 Apr  7 13:00 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Apr  7 13:00 test-1744030845970935148
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-340022 ssh cat /mount-9p/test-1744030845970935148
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-340022 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [cc7fca41-3253-4146-a385-d2cf5d9cd599] Pending
helpers_test.go:344: "busybox-mount" [cc7fca41-3253-4146-a385-d2cf5d9cd599] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [cc7fca41-3253-4146-a385-d2cf5d9cd599] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [cc7fca41-3253-4146-a385-d2cf5d9cd599] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.004277466s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-340022 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-340022 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-340022 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-340022 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-340022 /tmp/TestFunctionalparallelMountCmdany-port1210175572/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.40s)
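
The retry.go line above shows the harness polling findmnt until the 9p mount appears, with a randomized delay between attempts. A minimal sketch of that poll-with-backoff pattern, mirroring the `findmnt -T /mount-9p | grep 9p` check; the pollUntil helper name is mine, not the harness's:

	package main

	import (
		"fmt"
		"math/rand"
		"os/exec"
		"strings"
		"time"
	)

	// pollUntil retries fn with a small randomized delay until it succeeds,
	// the way the harness retries the findmnt probe above.
	func pollUntil(attempts int, fn func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			delay := 200*time.Millisecond + time.Duration(rand.Intn(400))*time.Millisecond
			fmt.Printf("will retry after %v: %v\n", delay, err)
			time.Sleep(delay)
		}
		return err
	}

	func main() {
		err := pollUntil(10, func() error {
			out, err := exec.Command("findmnt", "-T", "/mount-9p").Output()
			if err != nil {
				return err
			}
			if !strings.Contains(string(out), "9p") { // same check as the grep above
				return fmt.Errorf("no 9p filesystem mounted at /mount-9p yet")
			}
			return nil
		})
		fmt.Println("mounted:", err == nil)
	}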

TestFunctional/parallel/ServiceCmd/JSONOutput (0.68s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1506: (dbg) Run:  out/minikube-linux-arm64 -p functional-340022 service list -o json
functional_test.go:1511: Took "676.326256ms" to run "out/minikube-linux-arm64 -p functional-340022 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.68s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.54s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1526: (dbg) Run:  out/minikube-linux-arm64 -p functional-340022 service --namespace=default --https --url hello-node
functional_test.go:1539: found endpoint: https://192.168.49.2:31687
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.54s)

TestFunctional/parallel/ServiceCmd/Format (0.48s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1557: (dbg) Run:  out/minikube-linux-arm64 -p functional-340022 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.48s)

TestFunctional/parallel/ServiceCmd/URL (0.46s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1576: (dbg) Run:  out/minikube-linux-arm64 -p functional-340022 service hello-node --url
functional_test.go:1582: found endpoint for hello-node: http://192.168.49.2:31687
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.46s)

TestFunctional/parallel/MountCmd/specific-port (2.33s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-340022 /tmp/TestFunctionalparallelMountCmdspecific-port2887744994/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-340022 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-340022 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (425.40167ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I0407 13:00:54.792835 1495026 retry.go:31] will retry after 622.767374ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-340022 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-340022 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-340022 /tmp/TestFunctionalparallelMountCmdspecific-port2887744994/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-340022 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-340022 ssh "sudo umount -f /mount-9p": exit status 1 (358.2738ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-340022 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-340022 /tmp/TestFunctionalparallelMountCmdspecific-port2887744994/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.33s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.86s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-340022 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3626795275/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-340022 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3626795275/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-340022 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3626795275/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-340022 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Done: out/minikube-linux-arm64 -p functional-340022 ssh "findmnt -T" /mount1: (1.050317006s)
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-340022 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-340022 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-340022 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-340022 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3626795275/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-340022 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3626795275/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-340022 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3626795275/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.86s)

TestFunctional/parallel/Version/short (0.07s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2273: (dbg) Run:  out/minikube-linux-arm64 -p functional-340022 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

TestFunctional/parallel/Version/components (1.19s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2287: (dbg) Run:  out/minikube-linux-arm64 -p functional-340022 version -o=json --components
functional_test.go:2287: (dbg) Done: out/minikube-linux-arm64 -p functional-340022 version -o=json --components: (1.18554821s)
--- PASS: TestFunctional/parallel/Version/components (1.19s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:278: (dbg) Run:  out/minikube-linux-arm64 -p functional-340022 image ls --format short --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-arm64 -p functional-340022 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.32.2
registry.k8s.io/kube-proxy:v1.32.2
registry.k8s.io/kube-controller-manager:v1.32.2
registry.k8s.io/kube-apiserver:v1.32.2
registry.k8s.io/etcd:3.5.16-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-340022
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
docker.io/kicbase/echo-server:functional-340022
functional_test.go:286: (dbg) Stderr: out/minikube-linux-arm64 -p functional-340022 image ls --format short --alsologtostderr:
I0407 13:01:08.429305 1539881 out.go:345] Setting OutFile to fd 1 ...
I0407 13:01:08.429529 1539881 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0407 13:01:08.429557 1539881 out.go:358] Setting ErrFile to fd 2...
I0407 13:01:08.429577 1539881 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0407 13:01:08.429868 1539881 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20598-1489638/.minikube/bin
I0407 13:01:08.430566 1539881 config.go:182] Loaded profile config "functional-340022": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
I0407 13:01:08.430776 1539881 config.go:182] Loaded profile config "functional-340022": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
I0407 13:01:08.431279 1539881 cli_runner.go:164] Run: docker container inspect functional-340022 --format={{.State.Status}}
I0407 13:01:08.449490 1539881 ssh_runner.go:195] Run: systemctl --version
I0407 13:01:08.449549 1539881 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-340022
I0407 13:01:08.468718 1539881 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34311 SSHKeyPath:/home/jenkins/minikube-integration/20598-1489638/.minikube/machines/functional-340022/id_rsa Username:docker}
I0407 13:01:08.556137 1539881 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:278: (dbg) Run:  out/minikube-linux-arm64 -p functional-340022 image ls --format table --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-arm64 -p functional-340022 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
| docker.io/library/minikube-local-cache-test | functional-340022 | b9ef9e74c0d4d | 30B    |
| docker.io/kicbase/echo-server               | functional-340022 | ce2d2cda2d858 | 4.78MB |
| docker.io/kubernetesui/metrics-scraper      | <none>            | a422e0e982356 | 42.3MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
| docker.io/library/nginx                     | latest            | 2c9168b3c9a84 | 197MB  |
| docker.io/library/nginx                     | alpine            | cedb667e1a7b4 | 49.4MB |
| registry.k8s.io/pause                       | 3.10              | afb61768ce381 | 514kB  |
| docker.io/kubernetesui/dashboard            | <none>            | 20b332c9a70d8 | 244MB  |
| registry.k8s.io/kube-apiserver              | v1.32.2           | 6417e1437b6d9 | 93.9MB |
| registry.k8s.io/etcd                        | 3.5.16-0          | 7fc9d4aa817aa | 142MB  |
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
| registry.k8s.io/kube-controller-manager     | v1.32.2           | 3c9285acfd2ff | 87.2MB |
| registry.k8s.io/kube-scheduler              | v1.32.2           | 82dfa03f692fb | 67.9MB |
| registry.k8s.io/kube-proxy                  | v1.32.2           | e5aac5df76d9b | 97.1MB |
| registry.k8s.io/coredns/coredns             | v1.11.3           | 2f6c962e7b831 | 60.2MB |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:286: (dbg) Stderr: out/minikube-linux-arm64 -p functional-340022 image ls --format table --alsologtostderr:
I0407 13:01:08.950556 1540039 out.go:345] Setting OutFile to fd 1 ...
I0407 13:01:08.950735 1540039 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0407 13:01:08.950762 1540039 out.go:358] Setting ErrFile to fd 2...
I0407 13:01:08.950780 1540039 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0407 13:01:08.951056 1540039 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20598-1489638/.minikube/bin
I0407 13:01:08.951740 1540039 config.go:182] Loaded profile config "functional-340022": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
I0407 13:01:08.951918 1540039 config.go:182] Loaded profile config "functional-340022": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
I0407 13:01:08.952413 1540039 cli_runner.go:164] Run: docker container inspect functional-340022 --format={{.State.Status}}
I0407 13:01:08.976274 1540039 ssh_runner.go:195] Run: systemctl --version
I0407 13:01:08.976329 1540039 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-340022
I0407 13:01:09.001789 1540039 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34311 SSHKeyPath:/home/jenkins/minikube-integration/20598-1489638/.minikube/machines/functional-340022/id_rsa Username:docker}
I0407 13:01:09.100065 1540039 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)
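
The json format exercised by the next test, ImageListJson, prints an array of objects with id, repoDigests, repoTags, and size fields (visible in its stdout below). A hedged sketch of decoding that output when piped into a small tool; the struct is inferred from the log, not minikube's exported type:

	package main

	import (
		"encoding/json"
		"fmt"
		"os"
	)

	// image matches the entries `minikube image ls --format json` prints,
	// with the field set inferred from the test output below.
	type image struct {
		ID          string   `json:"id"`
		RepoDigests []string `json:"repoDigests"`
		RepoTags    []string `json:"repoTags"`
		Size        string   `json:"size"` // bytes, as a decimal string
	}

	func main() {
		// Usage: out/minikube-linux-arm64 -p functional-340022 image ls --format json | thistool
		var images []image
		if err := json.NewDecoder(os.Stdin).Decode(&images); err != nil {
			fmt.Fprintln(os.Stderr, "decode:", err)
			os.Exit(1)
		}
		for _, img := range images {
			for _, tag := range img.RepoTags {
				fmt.Printf("%-60s %s bytes\n", tag, img.Size)
			}
		}
	}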

TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:278: (dbg) Run:  out/minikube-linux-arm64 -p functional-340022 image ls --format json --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-arm64 -p functional-340022 image ls --format json --alsologtostderr:
[{"id":"e5aac5df76d9b8dc899ab8c4db25a7648e7fb25cafe7a155066247883c78f062","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.32.2"],"size":"97100000"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"244000000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.16-0"],"size":"142000000"},{"id":"afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"514000"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-340022"],"size":"4780000"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"42300000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"6417e1437b6d9a789e1ca789695a574e1df00a632bdbfbcae9695c9a7d500e32","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.32.2"],"size":"93900000"},{"id":"3c9285acfd2ff7915bb451cc40ac060366ac519f3fef00c455f5aca0e0346c4d","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.32.2"],"size":"87200000"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"},{"id":"b9ef9e74c0d4d2755f0c460daac3cf4a1866743ce9fe5f76009f676627461d91","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-340022"],"size":"30"},{"id":"cedb667e1a7b4e6d843a4f74f1f2db0dac1c29b43978aa72dbae2193e3b8eea3","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"49400000"},{"id":"2c9168b3c9a84851f91e03534dc4136951e9f581ab3ac8ee38b28b49ad57ba38","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"197000000"},{"id":"2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"60200000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"82dfa03f692fb5d84f66c17d6ee9126b081182152b25d28ea456d89b7d5d8911","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.32.2"],"size":"67900000"}]
functional_test.go:286: (dbg) Stderr: out/minikube-linux-arm64 -p functional-340022 image ls --format json --alsologtostderr:
I0407 13:01:08.666922 1539941 out.go:345] Setting OutFile to fd 1 ...
I0407 13:01:08.667058 1539941 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0407 13:01:08.667071 1539941 out.go:358] Setting ErrFile to fd 2...
I0407 13:01:08.667081 1539941 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0407 13:01:08.667378 1539941 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20598-1489638/.minikube/bin
I0407 13:01:08.668021 1539941 config.go:182] Loaded profile config "functional-340022": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
I0407 13:01:08.668138 1539941 config.go:182] Loaded profile config "functional-340022": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
I0407 13:01:08.668696 1539941 cli_runner.go:164] Run: docker container inspect functional-340022 --format={{.State.Status}}
I0407 13:01:08.712013 1539941 ssh_runner.go:195] Run: systemctl --version
I0407 13:01:08.712073 1539941 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-340022
I0407 13:01:08.735769 1539941 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34311 SSHKeyPath:/home/jenkins/minikube-integration/20598-1489638/.minikube/machines/functional-340022/id_rsa Username:docker}
I0407 13:01:08.823928 1539941 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)
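Note: as the stderr above shows, each `image ls` format is produced the same way: minikube inspects the container state, opens an SSH session to the node, and runs `docker images --no-trunc --format "{{json .}}"` inside it, then re-renders that JSON as table, json, or yaml output. A rough manual equivalent (a sketch, assuming the profile is still running; everything beyond the logged command is my own):

	# List the node's images by hand over minikube ssh
	out/minikube-linux-arm64 -p functional-340022 ssh -- docker images --no-trunc --format "{{json .}}"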

TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:278: (dbg) Run:  out/minikube-linux-arm64 -p functional-340022 image ls --format yaml --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-arm64 -p functional-340022 image ls --format yaml --alsologtostderr:
- id: b9ef9e74c0d4d2755f0c460daac3cf4a1866743ce9fe5f76009f676627461d91
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-340022
size: "30"
- id: 6417e1437b6d9a789e1ca789695a574e1df00a632bdbfbcae9695c9a7d500e32
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.32.2
size: "93900000"
- id: cedb667e1a7b4e6d843a4f74f1f2db0dac1c29b43978aa72dbae2193e3b8eea3
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "49400000"
- id: 2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "60200000"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "244000000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"
- id: 82dfa03f692fb5d84f66c17d6ee9126b081182152b25d28ea456d89b7d5d8911
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.32.2
size: "67900000"
- id: 7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.16-0
size: "142000000"
- id: afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "514000"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-340022
size: "4780000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: 3c9285acfd2ff7915bb451cc40ac060366ac519f3fef00c455f5aca0e0346c4d
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.32.2
size: "87200000"
- id: e5aac5df76d9b8dc899ab8c4db25a7648e7fb25cafe7a155066247883c78f062
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.32.2
size: "97100000"
- id: 2c9168b3c9a84851f91e03534dc4136951e9f581ab3ac8ee38b28b49ad57ba38
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "197000000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "42300000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"

functional_test.go:286: (dbg) Stderr: out/minikube-linux-arm64 -p functional-340022 image ls --format yaml --alsologtostderr:
I0407 13:01:08.933071 1540032 out.go:345] Setting OutFile to fd 1 ...
I0407 13:01:08.933246 1540032 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0407 13:01:08.933257 1540032 out.go:358] Setting ErrFile to fd 2...
I0407 13:01:08.933262 1540032 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0407 13:01:08.933499 1540032 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20598-1489638/.minikube/bin
I0407 13:01:08.934125 1540032 config.go:182] Loaded profile config "functional-340022": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
I0407 13:01:08.934248 1540032 config.go:182] Loaded profile config "functional-340022": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
I0407 13:01:08.934682 1540032 cli_runner.go:164] Run: docker container inspect functional-340022 --format={{.State.Status}}
I0407 13:01:08.955347 1540032 ssh_runner.go:195] Run: systemctl --version
I0407 13:01:08.955403 1540032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-340022
I0407 13:01:08.979326 1540032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34311 SSHKeyPath:/home/jenkins/minikube-integration/20598-1489638/.minikube/machines/functional-340022/id_rsa Username:docker}
I0407 13:01:09.064093 1540032 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.48s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-340022 ssh pgrep buildkitd
functional_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-340022 ssh pgrep buildkitd: exit status 1 (363.011196ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:332: (dbg) Run:  out/minikube-linux-arm64 -p functional-340022 image build -t localhost/my-image:functional-340022 testdata/build --alsologtostderr
functional_test.go:332: (dbg) Done: out/minikube-linux-arm64 -p functional-340022 image build -t localhost/my-image:functional-340022 testdata/build --alsologtostderr: (2.913551354s)
functional_test.go:340: (dbg) Stderr: out/minikube-linux-arm64 -p functional-340022 image build -t localhost/my-image:functional-340022 testdata/build --alsologtostderr:
I0407 13:01:09.548627 1540234 out.go:345] Setting OutFile to fd 1 ...
I0407 13:01:09.550256 1540234 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0407 13:01:09.550285 1540234 out.go:358] Setting ErrFile to fd 2...
I0407 13:01:09.550293 1540234 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0407 13:01:09.550583 1540234 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20598-1489638/.minikube/bin
I0407 13:01:09.551239 1540234 config.go:182] Loaded profile config "functional-340022": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
I0407 13:01:09.553223 1540234 config.go:182] Loaded profile config "functional-340022": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
I0407 13:01:09.553708 1540234 cli_runner.go:164] Run: docker container inspect functional-340022 --format={{.State.Status}}
I0407 13:01:09.577804 1540234 ssh_runner.go:195] Run: systemctl --version
I0407 13:01:09.577862 1540234 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-340022
I0407 13:01:09.607642 1540234 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34311 SSHKeyPath:/home/jenkins/minikube-integration/20598-1489638/.minikube/machines/functional-340022/id_rsa Username:docker}
I0407 13:01:09.697953 1540234 build_images.go:161] Building image from path: /tmp/build.300340919.tar
I0407 13:01:09.698023 1540234 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0407 13:01:09.707865 1540234 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.300340919.tar
I0407 13:01:09.711339 1540234 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.300340919.tar: stat -c "%s %y" /var/lib/minikube/build/build.300340919.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.300340919.tar': No such file or directory
I0407 13:01:09.711363 1540234 ssh_runner.go:362] scp /tmp/build.300340919.tar --> /var/lib/minikube/build/build.300340919.tar (3072 bytes)
I0407 13:01:09.747328 1540234 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.300340919
I0407 13:01:09.756984 1540234 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.300340919 -xf /var/lib/minikube/build/build.300340919.tar
I0407 13:01:09.766211 1540234 docker.go:360] Building image: /var/lib/minikube/build/build.300340919
I0407 13:01:09.766292 1540234 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-340022 /var/lib/minikube/build/build.300340919
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.3s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.1s
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9 527B / 527B done
#5 sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02 1.47kB / 1.47kB done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.0s done
#5 DONE 0.5s

#6 [2/3] RUN true
#6 DONE 0.3s

#7 [3/3] ADD content.txt /
#7 DONE 0.1s

#8 exporting to image
#8 exporting layers 0.1s done
#8 writing image sha256:f8143a81c1e4f0ba035df98c79b29a5606d08ce1dac6fd3d6694281cca5c8b43 done
#8 naming to localhost/my-image:functional-340022 done
#8 DONE 0.1s
I0407 13:01:12.344330 1540234 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-340022 /var/lib/minikube/build/build.300340919: (2.578011531s)
I0407 13:01:12.344398 1540234 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.300340919
I0407 13:01:12.353247 1540234 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.300340919.tar
I0407 13:01:12.366036 1540234 build_images.go:217] Built localhost/my-image:functional-340022 from /tmp/build.300340919.tar
I0407 13:01:12.366068 1540234 build_images.go:133] succeeded building to: functional-340022
I0407 13:01:12.366074 1540234 build_images.go:134] failed building to: 
functional_test.go:468: (dbg) Run:  out/minikube-linux-arm64 -p functional-340022 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.48s)
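Note: the build log above pins down the build definition under testdata/build fairly precisely: step #1 transfers a 97-byte Dockerfile, #5 pulls gcr.io/k8s-minikube/busybox:latest, #6 runs a no-op command, and #7 adds content.txt. A reconstruction consistent with those steps (an inference from the log, not the verified repository contents):

	# testdata/build/Dockerfile, as implied by build steps #5-#7
	FROM gcr.io/k8s-minikube/busybox:latest
	RUN true
	ADD content.txt /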

TestFunctional/parallel/ImageCommands/Setup (0.79s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:359: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:364: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-340022
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.79s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:372: (dbg) Run:  out/minikube-linux-arm64 -p functional-340022 image load --daemon kicbase/echo-server:functional-340022 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-arm64 -p functional-340022 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.20s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.84s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p functional-340022 image load --daemon kicbase/echo-server:functional-340022 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-arm64 -p functional-340022 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.84s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:252: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:257: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-340022
functional_test.go:262: (dbg) Run:  out/minikube-linux-arm64 -p functional-340022 image load --daemon kicbase/echo-server:functional-340022 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-arm64 -p functional-340022 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.07s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.34s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:397: (dbg) Run:  out/minikube-linux-arm64 -p functional-340022 image save kicbase/echo-server:functional-340022 /home/jenkins/workspace/Docker_Linux_docker_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.34s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.42s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:409: (dbg) Run:  out/minikube-linux-arm64 -p functional-340022 image rm kicbase/echo-server:functional-340022 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-arm64 -p functional-340022 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.42s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.67s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:426: (dbg) Run:  out/minikube-linux-arm64 -p functional-340022 image load /home/jenkins/workspace/Docker_Linux_docker_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-arm64 -p functional-340022 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.67s)
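Note: ImageSaveToFile, ImageRemove, and ImageLoadFromFile above together exercise a tarball round trip: export the tagged image, delete it, and re-import it from the tar. Condensed from the logged commands (the /tmp path is illustrative; the test used the workspace path shown above):

	# Save an image from the cluster to a tar, drop it, then load it back
	out/minikube-linux-arm64 -p functional-340022 image save kicbase/echo-server:functional-340022 /tmp/echo-server-save.tar
	out/minikube-linux-arm64 -p functional-340022 image rm kicbase/echo-server:functional-340022
	out/minikube-linux-arm64 -p functional-340022 image load /tmp/echo-server-save.tar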

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.36s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:436: (dbg) Run:  docker rmi kicbase/echo-server:functional-340022
functional_test.go:441: (dbg) Run:  out/minikube-linux-arm64 -p functional-340022 image save --daemon kicbase/echo-server:functional-340022 --alsologtostderr
functional_test.go:449: (dbg) Run:  docker image inspect kicbase/echo-server:functional-340022
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.36s)

TestFunctional/parallel/DockerEnv/bash (1.4s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:516: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-arm64 -p functional-340022 docker-env) && out/minikube-linux-arm64 status -p functional-340022"
functional_test.go:539: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-arm64 -p functional-340022 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.40s)
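Note: the DockerEnv test passes because `minikube docker-env` emits shell exports that repoint the host's docker client at the dockerd inside the node, so after the eval, `docker images` lists the cluster's images. The output typically looks roughly like this (a sketch; the values are placeholders, not taken from this log):

	# Approximate shape of `out/minikube-linux-arm64 -p functional-340022 docker-env` output
	export DOCKER_TLS_VERIFY="1"
	export DOCKER_HOST="tcp://127.0.0.1:<mapped-port>"
	export DOCKER_CERT_PATH="<MINIKUBE_HOME>/certs"
	export MINIKUBE_ACTIVE_DOCKERD="functional-340022"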

TestFunctional/parallel/UpdateContextCmd/no_changes (0.21s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2136: (dbg) Run:  out/minikube-linux-arm64 -p functional-340022 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.21s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.2s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2136: (dbg) Run:  out/minikube-linux-arm64 -p functional-340022 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.20s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.17s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2136: (dbg) Run:  out/minikube-linux-arm64 -p functional-340022 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.17s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:207: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:207: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-340022
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:215: (dbg) Run:  docker rmi -f localhost/my-image:functional-340022
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:223: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-340022
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (128.23s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-785864 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=docker
E0407 13:01:52.120325 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/addons-378486/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-785864 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=docker: (2m7.358751358s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-785864 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (128.23s)

TestMultiControlPlane/serial/DeployApp (43.4s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-785864 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-785864 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-785864 -- rollout status deployment/busybox: (4.4259928s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-785864 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
I0407 13:03:28.418800 1495026 retry.go:31] will retry after 619.673466ms: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-785864 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
I0407 13:03:29.230159 1495026 retry.go:31] will retry after 2.07059542s: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-785864 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
I0407 13:03:31.474231 1495026 retry.go:31] will retry after 3.201133463s: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-785864 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
I0407 13:03:34.836037 1495026 retry.go:31] will retry after 3.112832891s: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-785864 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
I0407 13:03:38.154164 1495026 retry.go:31] will retry after 3.638198305s: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-785864 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
I0407 13:03:41.967781 1495026 retry.go:31] will retry after 8.371743407s: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-785864 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
I0407 13:03:50.534699 1495026 retry.go:31] will retry after 13.505128382s: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-785864 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-785864 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-785864 -- exec busybox-58667487b6-74srx -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-785864 -- exec busybox-58667487b6-7blkb -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-785864 -- exec busybox-58667487b6-ks42m -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-785864 -- exec busybox-58667487b6-74srx -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-785864 -- exec busybox-58667487b6-7blkb -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-785864 -- exec busybox-58667487b6-ks42m -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-785864 -- exec busybox-58667487b6-74srx -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-785864 -- exec busybox-58667487b6-7blkb -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-785864 -- exec busybox-58667487b6-ks42m -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (43.40s)
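Note: the retry lines above are the test's polling loop: it reruns the jsonpath query with growing randomized delays (0.6s up to 13.5s in this run) until all three busybox replicas, one per node, report a pod IP. Stripped of the harness, the probe reduces to the following (context name from this run; the fixed sleep is a simplification of the test's backoff):

	# Poll until all three pods have been assigned IPs
	until [ "$(kubectl --context ha-785864 get pods -o jsonpath='{.items[*].status.podIP}' | wc -w)" -eq 3 ]; do
		sleep 5
	done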

TestMultiControlPlane/serial/PingHostFromPods (1.88s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-785864 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-785864 -- exec busybox-58667487b6-74srx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-785864 -- exec busybox-58667487b6-74srx -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-785864 -- exec busybox-58667487b6-7blkb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-785864 -- exec busybox-58667487b6-7blkb -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-785864 -- exec busybox-58667487b6-ks42m -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
E0407 13:04:08.258854 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/addons-378486/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-785864 -- exec busybox-58667487b6-ks42m -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.88s)
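Note: the ha_test.go:207 pipeline resolves host.minikube.internal inside each pod and slices the address out of busybox's nslookup output: awk 'NR==5' keeps the line carrying the answer and cut -d' ' -f3 keeps the address itself, which the next step pings (192.168.49.1, the docker network gateway, judging by the ping targets). Run by hand against one of the pods above it would look like this (pod name from this run; NR==5 is specific to busybox's nslookup layout):

	# Extract the host IP a pod resolves for host.minikube.internal, then ping it
	out/minikube-linux-arm64 kubectl -p ha-785864 -- exec busybox-58667487b6-74srx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
	out/minikube-linux-arm64 kubectl -p ha-785864 -- exec busybox-58667487b6-74srx -- sh -c "ping -c 1 192.168.49.1"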

TestMultiControlPlane/serial/AddWorkerNode (26.01s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-785864 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-785864 -v=7 --alsologtostderr: (24.987150037s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-785864 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-785864 status -v=7 --alsologtostderr: (1.018648161s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (26.01s)

TestMultiControlPlane/serial/NodeLabels (0.12s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-785864 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.12s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (1.03s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.034141743s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.03s)

TestMultiControlPlane/serial/CopyFile (19.69s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-785864 status --output json -v=7 --alsologtostderr
E0407 13:04:35.962655 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/addons-378486/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-785864 status --output json -v=7 --alsologtostderr: (1.002951778s)
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-785864 cp testdata/cp-test.txt ha-785864:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-785864 ssh -n ha-785864 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-785864 cp ha-785864:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile634133161/001/cp-test_ha-785864.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-785864 ssh -n ha-785864 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-785864 cp ha-785864:/home/docker/cp-test.txt ha-785864-m02:/home/docker/cp-test_ha-785864_ha-785864-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-785864 ssh -n ha-785864 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-785864 ssh -n ha-785864-m02 "sudo cat /home/docker/cp-test_ha-785864_ha-785864-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-785864 cp ha-785864:/home/docker/cp-test.txt ha-785864-m03:/home/docker/cp-test_ha-785864_ha-785864-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-785864 ssh -n ha-785864 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-785864 ssh -n ha-785864-m03 "sudo cat /home/docker/cp-test_ha-785864_ha-785864-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-785864 cp ha-785864:/home/docker/cp-test.txt ha-785864-m04:/home/docker/cp-test_ha-785864_ha-785864-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-785864 ssh -n ha-785864 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-785864 ssh -n ha-785864-m04 "sudo cat /home/docker/cp-test_ha-785864_ha-785864-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-785864 cp testdata/cp-test.txt ha-785864-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-785864 ssh -n ha-785864-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-785864 cp ha-785864-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile634133161/001/cp-test_ha-785864-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-785864 ssh -n ha-785864-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-785864 cp ha-785864-m02:/home/docker/cp-test.txt ha-785864:/home/docker/cp-test_ha-785864-m02_ha-785864.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-785864 ssh -n ha-785864-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-785864 ssh -n ha-785864 "sudo cat /home/docker/cp-test_ha-785864-m02_ha-785864.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-785864 cp ha-785864-m02:/home/docker/cp-test.txt ha-785864-m03:/home/docker/cp-test_ha-785864-m02_ha-785864-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-785864 ssh -n ha-785864-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-785864 ssh -n ha-785864-m03 "sudo cat /home/docker/cp-test_ha-785864-m02_ha-785864-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-785864 cp ha-785864-m02:/home/docker/cp-test.txt ha-785864-m04:/home/docker/cp-test_ha-785864-m02_ha-785864-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-785864 ssh -n ha-785864-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-785864 ssh -n ha-785864-m04 "sudo cat /home/docker/cp-test_ha-785864-m02_ha-785864-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-785864 cp testdata/cp-test.txt ha-785864-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-785864 ssh -n ha-785864-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-785864 cp ha-785864-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile634133161/001/cp-test_ha-785864-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-785864 ssh -n ha-785864-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-785864 cp ha-785864-m03:/home/docker/cp-test.txt ha-785864:/home/docker/cp-test_ha-785864-m03_ha-785864.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-785864 ssh -n ha-785864-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-785864 ssh -n ha-785864 "sudo cat /home/docker/cp-test_ha-785864-m03_ha-785864.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-785864 cp ha-785864-m03:/home/docker/cp-test.txt ha-785864-m02:/home/docker/cp-test_ha-785864-m03_ha-785864-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-785864 ssh -n ha-785864-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-785864 ssh -n ha-785864-m02 "sudo cat /home/docker/cp-test_ha-785864-m03_ha-785864-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-785864 cp ha-785864-m03:/home/docker/cp-test.txt ha-785864-m04:/home/docker/cp-test_ha-785864-m03_ha-785864-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-785864 ssh -n ha-785864-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-785864 ssh -n ha-785864-m04 "sudo cat /home/docker/cp-test_ha-785864-m03_ha-785864-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-785864 cp testdata/cp-test.txt ha-785864-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-785864 ssh -n ha-785864-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-785864 cp ha-785864-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile634133161/001/cp-test_ha-785864-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-785864 ssh -n ha-785864-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-785864 cp ha-785864-m04:/home/docker/cp-test.txt ha-785864:/home/docker/cp-test_ha-785864-m04_ha-785864.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-785864 ssh -n ha-785864-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-785864 ssh -n ha-785864 "sudo cat /home/docker/cp-test_ha-785864-m04_ha-785864.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-785864 cp ha-785864-m04:/home/docker/cp-test.txt ha-785864-m02:/home/docker/cp-test_ha-785864-m04_ha-785864-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-785864 ssh -n ha-785864-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-785864 ssh -n ha-785864-m02 "sudo cat /home/docker/cp-test_ha-785864-m04_ha-785864-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-785864 cp ha-785864-m04:/home/docker/cp-test.txt ha-785864-m03:/home/docker/cp-test_ha-785864-m04_ha-785864-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-785864 ssh -n ha-785864-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-785864 ssh -n ha-785864-m03 "sudo cat /home/docker/cp-test_ha-785864-m04_ha-785864-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.69s)
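Note: CopyFile drives `minikube cp` through every direction for all four machines: host to node, node back to the host, and node to node, verifying each copy with `ssh -n <node> sudo cat`. The three shapes, condensed from the runs above (the /tmp destination is illustrative):

	# Host -> node, node -> host, node -> node
	out/minikube-linux-arm64 -p ha-785864 cp testdata/cp-test.txt ha-785864:/home/docker/cp-test.txt
	out/minikube-linux-arm64 -p ha-785864 cp ha-785864:/home/docker/cp-test.txt /tmp/cp-test_ha-785864.txt
	out/minikube-linux-arm64 -p ha-785864 cp ha-785864:/home/docker/cp-test.txt ha-785864-m02:/home/docker/cp-test_ha-785864_ha-785864-m02.txt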

TestMultiControlPlane/serial/StopSecondaryNode (11.68s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-785864 node stop m02 -v=7 --alsologtostderr
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-785864 node stop m02 -v=7 --alsologtostderr: (10.969256009s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-785864 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-785864 status -v=7 --alsologtostderr: exit status 7 (706.823692ms)

-- stdout --
	ha-785864
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-785864-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-785864-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-785864-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0407 13:05:06.636044 1563958 out.go:345] Setting OutFile to fd 1 ...
	I0407 13:05:06.636215 1563958 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 13:05:06.636247 1563958 out.go:358] Setting ErrFile to fd 2...
	I0407 13:05:06.636267 1563958 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 13:05:06.636563 1563958 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20598-1489638/.minikube/bin
	I0407 13:05:06.636821 1563958 out.go:352] Setting JSON to false
	I0407 13:05:06.636901 1563958 mustload.go:65] Loading cluster: ha-785864
	I0407 13:05:06.636953 1563958 notify.go:220] Checking for updates...
	I0407 13:05:06.637487 1563958 config.go:182] Loaded profile config "ha-785864": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0407 13:05:06.637565 1563958 status.go:174] checking status of ha-785864 ...
	I0407 13:05:06.638494 1563958 cli_runner.go:164] Run: docker container inspect ha-785864 --format={{.State.Status}}
	I0407 13:05:06.659309 1563958 status.go:371] ha-785864 host status = "Running" (err=<nil>)
	I0407 13:05:06.659334 1563958 host.go:66] Checking if "ha-785864" exists ...
	I0407 13:05:06.659685 1563958 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-785864
	I0407 13:05:06.683164 1563958 host.go:66] Checking if "ha-785864" exists ...
	I0407 13:05:06.683462 1563958 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0407 13:05:06.683566 1563958 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-785864
	I0407 13:05:06.706252 1563958 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34316 SSHKeyPath:/home/jenkins/minikube-integration/20598-1489638/.minikube/machines/ha-785864/id_rsa Username:docker}
	I0407 13:05:06.792423 1563958 ssh_runner.go:195] Run: systemctl --version
	I0407 13:05:06.796445 1563958 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0407 13:05:06.808213 1563958 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0407 13:05:06.864485 1563958 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:62 OomKillDisable:true NGoroutines:72 SystemTime:2025-04-07 13:05:06.855452882 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1081-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0]] Warnings:<nil>}}
	I0407 13:05:06.865026 1563958 kubeconfig.go:125] found "ha-785864" server: "https://192.168.49.254:8443"
	I0407 13:05:06.865062 1563958 api_server.go:166] Checking apiserver status ...
	I0407 13:05:06.865106 1563958 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:05:06.876718 1563958 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2403/cgroup
	I0407 13:05:06.886143 1563958 api_server.go:182] apiserver freezer: "12:freezer:/docker/47691476b3cd8aa976372d280d1bb5eda7fd46340ca4a9d944953985d102f75b/kubepods/burstable/pod948d27c1bfb3af0a15c56be1315d4375/50b72a5203241eb557cb52b53b50ef887b5719efb97cb6da98aacb599cc71384"
	I0407 13:05:06.886225 1563958 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/47691476b3cd8aa976372d280d1bb5eda7fd46340ca4a9d944953985d102f75b/kubepods/burstable/pod948d27c1bfb3af0a15c56be1315d4375/50b72a5203241eb557cb52b53b50ef887b5719efb97cb6da98aacb599cc71384/freezer.state
	I0407 13:05:06.894810 1563958 api_server.go:204] freezer state: "THAWED"
	I0407 13:05:06.894838 1563958 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0407 13:05:06.902877 1563958 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0407 13:05:06.902921 1563958 status.go:463] ha-785864 apiserver status = Running (err=<nil>)
	I0407 13:05:06.902933 1563958 status.go:176] ha-785864 status: &{Name:ha-785864 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0407 13:05:06.902954 1563958 status.go:174] checking status of ha-785864-m02 ...
	I0407 13:05:06.903283 1563958 cli_runner.go:164] Run: docker container inspect ha-785864-m02 --format={{.State.Status}}
	I0407 13:05:06.925064 1563958 status.go:371] ha-785864-m02 host status = "Stopped" (err=<nil>)
	I0407 13:05:06.925089 1563958 status.go:384] host is not running, skipping remaining checks
	I0407 13:05:06.925097 1563958 status.go:176] ha-785864-m02 status: &{Name:ha-785864-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0407 13:05:06.925118 1563958 status.go:174] checking status of ha-785864-m03 ...
	I0407 13:05:06.925431 1563958 cli_runner.go:164] Run: docker container inspect ha-785864-m03 --format={{.State.Status}}
	I0407 13:05:06.942892 1563958 status.go:371] ha-785864-m03 host status = "Running" (err=<nil>)
	I0407 13:05:06.942915 1563958 host.go:66] Checking if "ha-785864-m03" exists ...
	I0407 13:05:06.943226 1563958 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-785864-m03
	I0407 13:05:06.963878 1563958 host.go:66] Checking if "ha-785864-m03" exists ...
	I0407 13:05:06.964221 1563958 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0407 13:05:06.964268 1563958 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-785864-m03
	I0407 13:05:06.982115 1563958 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34326 SSHKeyPath:/home/jenkins/minikube-integration/20598-1489638/.minikube/machines/ha-785864-m03/id_rsa Username:docker}
	I0407 13:05:07.069892 1563958 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0407 13:05:07.082274 1563958 kubeconfig.go:125] found "ha-785864" server: "https://192.168.49.254:8443"
	I0407 13:05:07.082377 1563958 api_server.go:166] Checking apiserver status ...
	I0407 13:05:07.082476 1563958 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:05:07.094496 1563958 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2438/cgroup
	I0407 13:05:07.104523 1563958 api_server.go:182] apiserver freezer: "12:freezer:/docker/f8c98112acfed2b449058baa06ad667c6234f8ea0091705c8423346ffa7f3cce/kubepods/burstable/pod4b20366d285566ad2beb970e23b55184/bd88a6e04814216b77b9714a382f958655b106d0f1aa8743b1bf87f3cd785369"
	I0407 13:05:07.104603 1563958 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/f8c98112acfed2b449058baa06ad667c6234f8ea0091705c8423346ffa7f3cce/kubepods/burstable/pod4b20366d285566ad2beb970e23b55184/bd88a6e04814216b77b9714a382f958655b106d0f1aa8743b1bf87f3cd785369/freezer.state
	I0407 13:05:07.117395 1563958 api_server.go:204] freezer state: "THAWED"
	I0407 13:05:07.117422 1563958 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0407 13:05:07.125267 1563958 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0407 13:05:07.125343 1563958 status.go:463] ha-785864-m03 apiserver status = Running (err=<nil>)
	I0407 13:05:07.125365 1563958 status.go:176] ha-785864-m03 status: &{Name:ha-785864-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0407 13:05:07.125409 1563958 status.go:174] checking status of ha-785864-m04 ...
	I0407 13:05:07.125762 1563958 cli_runner.go:164] Run: docker container inspect ha-785864-m04 --format={{.State.Status}}
	I0407 13:05:07.145139 1563958 status.go:371] ha-785864-m04 host status = "Running" (err=<nil>)
	I0407 13:05:07.145162 1563958 host.go:66] Checking if "ha-785864-m04" exists ...
	I0407 13:05:07.145458 1563958 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-785864-m04
	I0407 13:05:07.164115 1563958 host.go:66] Checking if "ha-785864-m04" exists ...
	I0407 13:05:07.164433 1563958 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0407 13:05:07.164484 1563958 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-785864-m04
	I0407 13:05:07.189235 1563958 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34331 SSHKeyPath:/home/jenkins/minikube-integration/20598-1489638/.minikube/machines/ha-785864-m04/id_rsa Username:docker}
	I0407 13:05:07.276891 1563958 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0407 13:05:07.289733 1563958 status.go:176] ha-785864-m04 status: &{Name:ha-785864-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (11.68s)
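A note on the status check in the stderr block above: before probing the apiserver, minikube resolves the kube-apiserver PID, reads its freezer cgroup, and only when the state is "THAWED" (not paused) issues the HTTPS GET against /healthz, treating a 200 "ok" as APIServer:Running. Below is a minimal Go sketch of the probe half of that flow; checkHealthz and the hard-coded endpoint are illustrative stand-ins, not minikube's actual API.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// checkHealthz mirrors the "Checking apiserver healthz at ..." step in the
// log: GET <endpoint>/healthz and require HTTP 200.
func checkHealthz(endpoint string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The test cluster's apiserver uses a self-signed CA, so
		// verification is skipped here purely for illustration.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(endpoint + "/healthz")
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
	}
	return nil // corresponds to "apiserver status = Running" in the log
}

func main() {
	if err := checkHealthz("https://192.168.49.254:8443"); err != nil {
		fmt.Println("apiserver not healthy:", err)
		return
	}
	fmt.Println("ok")
}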

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.76s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.76s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (39.18s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-785864 node start m02 -v=7 --alsologtostderr
E0407 13:05:17.617535 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/functional-340022/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:05:17.623904 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/functional-340022/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:05:17.635283 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/functional-340022/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:05:17.656779 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/functional-340022/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:05:17.698111 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/functional-340022/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:05:17.779734 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/functional-340022/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:05:17.941110 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/functional-340022/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:05:18.262807 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/functional-340022/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:05:18.904922 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/functional-340022/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:05:20.186374 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/functional-340022/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:05:22.747637 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/functional-340022/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:05:27.868880 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/functional-340022/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:05:38.111145 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/functional-340022/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-785864 node start m02 -v=7 --alsologtostderr: (37.840186811s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-785864 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-785864 status -v=7 --alsologtostderr: (1.197085032s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (39.18s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.56s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.560034043s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.56s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (294.55s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-785864 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-785864 -v=7 --alsologtostderr
E0407 13:05:58.593073 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/functional-340022/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 stop -p ha-785864 -v=7 --alsologtostderr: (35.090044591s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 start -p ha-785864 --wait=true -v=7 --alsologtostderr
E0407 13:06:39.554354 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/functional-340022/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:08:01.476402 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/functional-340022/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:09:08.258672 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/addons-378486/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:10:17.615195 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/functional-340022/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 start -p ha-785864 --wait=true -v=7 --alsologtostderr: (4m19.257705764s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-785864
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (294.55s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (11.17s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-785864 node delete m03 -v=7 --alsologtostderr
E0407 13:10:45.318922 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/functional-340022/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-785864 node delete m03 -v=7 --alsologtostderr: (10.223006508s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-785864 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.17s)
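The go-template handed to kubectl above is worth unpacking: it ranges over every node in .items, then over that node's .status.conditions, and prints the .status of the condition whose .type is "Ready", one line per node, so the test can assert that every node remaining after the delete reports True. Here is a self-contained sketch that runs the same template with Go's text/template against a hand-built stand-in for `kubectl get nodes -o json` output.

package main

import (
	"os"
	"text/template"
)

func main() {
	// The exact template from the test invocation above.
	const tmpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

	// kubectl evaluates go-templates against the raw JSON document, so a
	// nested map with lowercase keys stands in for the real node list.
	nodes := map[string]any{
		"items": []map[string]any{
			{"status": map[string]any{"conditions": []map[string]any{
				{"type": "MemoryPressure", "status": "False"},
				{"type": "Ready", "status": "True"},
			}}},
			{"status": map[string]any{"conditions": []map[string]any{
				{"type": "Ready", "status": "True"},
			}}},
		},
	}

	t := template.Must(template.New("ready").Parse(tmpl))
	// Prints " True" once per node that carries a Ready condition.
	if err := t.Execute(os.Stdout, nodes); err != nil {
		panic(err)
	}
}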

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.78s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.78s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (32.82s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-785864 stop -v=7 --alsologtostderr
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-785864 stop -v=7 --alsologtostderr: (32.706079916s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-785864 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-785864 status -v=7 --alsologtostderr: exit status 7 (118.319778ms)

                                                
                                                
-- stdout --
	ha-785864
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-785864-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-785864-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0407 13:11:28.040537 1594078 out.go:345] Setting OutFile to fd 1 ...
	I0407 13:11:28.040664 1594078 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 13:11:28.040675 1594078 out.go:358] Setting ErrFile to fd 2...
	I0407 13:11:28.040681 1594078 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 13:11:28.040962 1594078 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20598-1489638/.minikube/bin
	I0407 13:11:28.041153 1594078 out.go:352] Setting JSON to false
	I0407 13:11:28.041192 1594078 mustload.go:65] Loading cluster: ha-785864
	I0407 13:11:28.041261 1594078 notify.go:220] Checking for updates...
	I0407 13:11:28.042159 1594078 config.go:182] Loaded profile config "ha-785864": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0407 13:11:28.042186 1594078 status.go:174] checking status of ha-785864 ...
	I0407 13:11:28.042696 1594078 cli_runner.go:164] Run: docker container inspect ha-785864 --format={{.State.Status}}
	I0407 13:11:28.063225 1594078 status.go:371] ha-785864 host status = "Stopped" (err=<nil>)
	I0407 13:11:28.063245 1594078 status.go:384] host is not running, skipping remaining checks
	I0407 13:11:28.063252 1594078 status.go:176] ha-785864 status: &{Name:ha-785864 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0407 13:11:28.063285 1594078 status.go:174] checking status of ha-785864-m02 ...
	I0407 13:11:28.063686 1594078 cli_runner.go:164] Run: docker container inspect ha-785864-m02 --format={{.State.Status}}
	I0407 13:11:28.089630 1594078 status.go:371] ha-785864-m02 host status = "Stopped" (err=<nil>)
	I0407 13:11:28.089655 1594078 status.go:384] host is not running, skipping remaining checks
	I0407 13:11:28.089662 1594078 status.go:176] ha-785864-m02 status: &{Name:ha-785864-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0407 13:11:28.089682 1594078 status.go:174] checking status of ha-785864-m04 ...
	I0407 13:11:28.089967 1594078 cli_runner.go:164] Run: docker container inspect ha-785864-m04 --format={{.State.Status}}
	I0407 13:11:28.106762 1594078 status.go:371] ha-785864-m04 host status = "Stopped" (err=<nil>)
	I0407 13:11:28.106784 1594078 status.go:384] host is not running, skipping remaining checks
	I0407 13:11:28.106793 1594078 status.go:176] ha-785864-m04 status: &{Name:ha-785864-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (32.82s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (87.62s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 start -p ha-785864 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=docker
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 start -p ha-785864 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=docker: (1m26.668002658s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-785864 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (87.62s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.75s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.75s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (44.68s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-785864 --control-plane -v=7 --alsologtostderr
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 node add -p ha-785864 --control-plane -v=7 --alsologtostderr: (43.688857559s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-785864 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (44.68s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.02s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.018202315s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.02s)

                                                
                                    
TestImageBuild/serial/Setup (32.65s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -p image-759346 --driver=docker  --container-runtime=docker
E0407 13:14:08.258663 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/addons-378486/client.crt: no such file or directory" logger="UnhandledError"
image_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -p image-759346 --driver=docker  --container-runtime=docker: (32.653405343s)
--- PASS: TestImageBuild/serial/Setup (32.65s)

                                                
                                    
TestImageBuild/serial/NormalBuild (1.89s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-759346
image_test.go:78: (dbg) Done: out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-759346: (1.890826386s)
--- PASS: TestImageBuild/serial/NormalBuild (1.89s)

                                                
                                    
TestImageBuild/serial/BuildWithBuildArg (1.02s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-759346
image_test.go:99: (dbg) Done: out/minikube-linux-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-759346: (1.023117122s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (1.02s)

                                                
                                    
TestImageBuild/serial/BuildWithDockerIgnore (0.8s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-759346
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.80s)

                                                
                                    
TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.71s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-759346
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.71s)

                                                
                                    
TestJSONOutput/start/Command (43.04s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-101462 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-101462 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker: (43.037707492s)
--- PASS: TestJSONOutput/start/Command (43.04s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.6s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-101462 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.60s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.51s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-101462 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.51s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (10.89s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-101462 --output=json --user=testUser
E0407 13:15:17.615846 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/functional-340022/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-101462 --output=json --user=testUser: (10.88943368s)
--- PASS: TestJSONOutput/stop/Command (10.89s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.23s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-858384 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-858384 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (93.637056ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"4ee372e9-5771-4298-ab1b-668295152e1c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-858384] minikube v1.35.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"c74a963a-66f7-405e-9d3a-c3e35c39c56f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20598"}}
	{"specversion":"1.0","id":"4f311e0f-f364-4efd-bfd4-7c51d4496152","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"087f6216-dff8-4a31-9c03-f687c06b09ec","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20598-1489638/kubeconfig"}}
	{"specversion":"1.0","id":"2a4d6678-2f1c-45a0-acc0-46f154f895b2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20598-1489638/.minikube"}}
	{"specversion":"1.0","id":"ff87dc5e-5302-46d2-a180-71137d425edb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"301e2e13-1ede-48cb-8211-594e908ca3f5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"8fe3f505-8f3a-42ff-9aaa-1145feacc5ec","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-858384" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-858384
--- PASS: TestErrorJSONOutput (0.23s)
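The stdout above shows what --output=json actually emits: one CloudEvents-style envelope per line, with the event class in type (io.k8s.sigs.minikube.step, .info, .error) and the payload, including exitcode and advice for errors, in data. Below is a minimal Go sketch of consuming such a stream and surfacing error events; the cloudEvent struct is a hand-rolled approximation for illustration, not minikube's own event types.

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// cloudEvent captures just the fields this sketch needs from each line.
type cloudEvent struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	// e.g. piped from: minikube start --output=json ... | thisprogram
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev cloudEvent
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip anything that isn't a JSON event line
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			// exitcode and message arrive as strings inside data,
			// as in the DRV_UNSUPPORTED_OS event above.
			fmt.Printf("error %s (exit %s): %s\n",
				ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		}
	}
}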

                                                
                                    
TestKicCustomNetwork/create_custom_network (31.02s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-854347 --network=
E0407 13:15:31.324856 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/addons-378486/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-854347 --network=: (28.884905862s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-854347" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-854347
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-854347: (2.106109701s)
--- PASS: TestKicCustomNetwork/create_custom_network (31.02s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (31.86s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-669250 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-669250 --network=bridge: (29.743082792s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-669250" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-669250
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-669250: (2.088848388s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (31.86s)

                                                
                                    
TestKicExistingNetwork (31.04s)

=== RUN   TestKicExistingNetwork
I0407 13:16:29.835272 1495026 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0407 13:16:29.850996 1495026 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0407 13:16:29.851712 1495026 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0407 13:16:29.852505 1495026 cli_runner.go:164] Run: docker network inspect existing-network
W0407 13:16:29.868397 1495026 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0407 13:16:29.868430 1495026 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I0407 13:16:29.868450 1495026 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I0407 13:16:29.868644 1495026 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0407 13:16:29.885113 1495026 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-cb68a24093bb IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:32:02:6c:69:0b:7a} reservation:<nil>}
I0407 13:16:29.890624 1495026 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
I0407 13:16:29.891026 1495026 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001728770}
I0407 13:16:29.891598 1495026 network_create.go:124] attempt to create docker network existing-network 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
I0407 13:16:29.891669 1495026 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0407 13:16:29.953923 1495026 network_create.go:108] docker network existing-network 192.168.67.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-947399 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-947399 --network=existing-network: (28.660522556s)
helpers_test.go:175: Cleaning up "existing-network-947399" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-947399
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-947399: (2.224181639s)
I0407 13:17:00.856211 1495026 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (31.04s)
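The network_create lines in the log above show the subnet picker at work: candidate private /24 ranges are tried in order, any that are already taken by a host interface or reserved are skipped, and the first free one is used (here 192.168.67.0/24, after 192.168.49.0/24 and 192.168.58.0/24 were rejected). A simplified Go sketch of that scan follows, under the assumption, suggested by the logged sequence, that candidates advance the third octet by 9; isTaken and isReserved are stand-in predicates, not minikube's functions.

package main

import "fmt"

// firstFreeSubnet walks the 192.168.x.0/24 candidates and returns the first
// one that is neither in use by an existing interface nor reserved.
func firstFreeSubnet(isTaken, isReserved func(string) bool) (string, error) {
	for octet := 49; octet <= 255; octet += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", octet)
		switch {
		case isTaken(cidr):
			fmt.Println("skipping subnet that is taken:", cidr)
		case isReserved(cidr):
			fmt.Println("skipping subnet that is reserved:", cidr)
		default:
			return cidr, nil
		}
	}
	return "", fmt.Errorf("no free 192.168.x.0/24 subnet found")
}

func main() {
	// Mirrors the log: 49.0/24 is held by br-cb68a24093bb, 58.0/24 is reserved.
	taken := map[string]bool{"192.168.49.0/24": true}
	reserved := map[string]bool{"192.168.58.0/24": true}
	cidr, err := firstFreeSubnet(
		func(c string) bool { return taken[c] },
		func(c string) bool { return reserved[c] },
	)
	if err != nil {
		panic(err)
	}
	fmt.Println("using free private subnet:", cidr) // 192.168.67.0/24
}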

                                                
                                    
TestKicCustomSubnet (36.69s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-013791 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-013791 --subnet=192.168.60.0/24: (34.488308236s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-013791 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-013791" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-013791
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-013791: (2.17947347s)
--- PASS: TestKicCustomSubnet (36.69s)

                                                
                                    
TestKicStaticIP (32.66s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-932060 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-932060 --static-ip=192.168.200.200: (30.311578391s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-932060 ip
helpers_test.go:175: Cleaning up "static-ip-932060" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-932060
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-932060: (2.184243748s)
--- PASS: TestKicStaticIP (32.66s)

                                                
                                    
TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (73.25s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-761991 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-761991 --driver=docker  --container-runtime=docker: (34.31561269s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-765093 --driver=docker  --container-runtime=docker
E0407 13:19:08.258604 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/addons-378486/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-765093 --driver=docker  --container-runtime=docker: (33.112156978s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-761991
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-765093
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-765093" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-765093
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-765093: (2.201744139s)
helpers_test.go:175: Cleaning up "first-761991" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-761991
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-761991: (2.245788301s)
--- PASS: TestMinikubeProfile (73.25s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (8.16s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-587583 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-587583 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (7.159246924s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.16s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.26s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-587583 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (10.33s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-589528 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-589528 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (9.331670136s)
--- PASS: TestMountStart/serial/StartWithMountSecond (10.33s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.28s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-589528 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.28s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.47s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-587583 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-587583 --alsologtostderr -v=5: (1.474570184s)
--- PASS: TestMountStart/serial/DeleteFirst (1.47s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-589528 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

                                                
                                    
TestMountStart/serial/Stop (1.2s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-589528
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-589528: (1.199627429s)
--- PASS: TestMountStart/serial/Stop (1.20s)

                                                
                                    
TestMountStart/serial/RestartStopped (8.04s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-589528
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-589528: (7.038459301s)
--- PASS: TestMountStart/serial/RestartStopped (8.04s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-589528 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (83.73s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-795040 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
E0407 13:20:17.615963 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/functional-340022/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-795040 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (1m23.142754334s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-795040 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (83.73s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (36.43s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-795040 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-795040 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-795040 -- rollout status deployment/busybox: (3.317920682s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-795040 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0407 13:21:22.919440 1495026 retry.go:31] will retry after 1.416622378s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-795040 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0407 13:21:24.505428 1495026 retry.go:31] will retry after 2.09708691s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-795040 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0407 13:21:26.764811 1495026 retry.go:31] will retry after 2.133411891s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-795040 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0407 13:21:29.038143 1495026 retry.go:31] will retry after 4.741881502s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-795040 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0407 13:21:33.944259 1495026 retry.go:31] will retry after 4.890743367s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-795040 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0407 13:21:38.991348 1495026 retry.go:31] will retry after 6.560702798s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
E0407 13:21:40.681074 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/functional-340022/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-795040 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0407 13:21:45.704094 1495026 retry.go:31] will retry after 8.173351705s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-795040 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-795040 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-795040 -- exec busybox-58667487b6-5v9nb -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-795040 -- exec busybox-58667487b6-q6p8d -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-795040 -- exec busybox-58667487b6-5v9nb -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-795040 -- exec busybox-58667487b6-q6p8d -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-795040 -- exec busybox-58667487b6-5v9nb -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-795040 -- exec busybox-58667487b6-q6p8d -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (36.43s)
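The retry.go lines above implement a poll-until-ready loop: the test re-queries the pod IPs and, while fewer than two are assigned, sleeps an interval that grows on each attempt (roughly 1.4s, 2.1s, 2.1s, 4.7s, ... up to ~8s). A generic Go sketch of that pattern follows; waitForPodIPs and the ~1.5x jittered growth factor are inferred from the logged intervals, not taken from the test's actual helper.

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// waitForPodIPs polls countPodIPs until it reports at least `want` IPs,
// sleeping a growing, jittered interval between attempts.
func waitForPodIPs(want int, deadline time.Duration, countPodIPs func() int) error {
	backoff := time.Second
	end := time.Now().Add(deadline)
	for time.Now().Before(end) {
		got := countPodIPs()
		if got >= want {
			return nil
		}
		fmt.Printf("expected %d Pod IPs but got %d (may be temporary); retrying in %v\n",
			want, got, backoff)
		time.Sleep(backoff)
		// Grow by ~1.5x with jitter, matching the cadence in the log.
		backoff = time.Duration(float64(backoff) * (1.3 + rand.Float64()*0.4))
	}
	return fmt.Errorf("timed out waiting for %d Pod IPs", want)
}

func main() {
	attempts := 0
	err := waitForPodIPs(2, 30*time.Second, func() int {
		attempts++
		if attempts < 3 {
			return 1 // only 10.244.0.3 assigned so far, as in the log
		}
		return 2
	})
	fmt.Println("done:", err)
}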

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (1.04s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-795040 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-795040 -- exec busybox-58667487b6-5v9nb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-795040 -- exec busybox-58667487b6-5v9nb -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-795040 -- exec busybox-58667487b6-q6p8d -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-795040 -- exec busybox-58667487b6-q6p8d -- sh -c "ping -c 1 192.168.58.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.04s)

                                                
                                    
TestMultiNode/serial/AddNode (16.4s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-795040 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-795040 -v 3 --alsologtostderr: (15.564366624s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-795040 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (16.40s)

TestMultiNode/serial/MultiNodeLabels (0.12s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-795040 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.12s)

TestMultiNode/serial/ProfileList (0.76s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.76s)

TestMultiNode/serial/CopyFile (10.03s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-795040 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-795040 cp testdata/cp-test.txt multinode-795040:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-795040 ssh -n multinode-795040 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-795040 cp multinode-795040:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile11825836/001/cp-test_multinode-795040.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-795040 ssh -n multinode-795040 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-795040 cp multinode-795040:/home/docker/cp-test.txt multinode-795040-m02:/home/docker/cp-test_multinode-795040_multinode-795040-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-795040 ssh -n multinode-795040 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-795040 ssh -n multinode-795040-m02 "sudo cat /home/docker/cp-test_multinode-795040_multinode-795040-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-795040 cp multinode-795040:/home/docker/cp-test.txt multinode-795040-m03:/home/docker/cp-test_multinode-795040_multinode-795040-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-795040 ssh -n multinode-795040 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-795040 ssh -n multinode-795040-m03 "sudo cat /home/docker/cp-test_multinode-795040_multinode-795040-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-795040 cp testdata/cp-test.txt multinode-795040-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-795040 ssh -n multinode-795040-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-795040 cp multinode-795040-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile11825836/001/cp-test_multinode-795040-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-795040 ssh -n multinode-795040-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-795040 cp multinode-795040-m02:/home/docker/cp-test.txt multinode-795040:/home/docker/cp-test_multinode-795040-m02_multinode-795040.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-795040 ssh -n multinode-795040-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-795040 ssh -n multinode-795040 "sudo cat /home/docker/cp-test_multinode-795040-m02_multinode-795040.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-795040 cp multinode-795040-m02:/home/docker/cp-test.txt multinode-795040-m03:/home/docker/cp-test_multinode-795040-m02_multinode-795040-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-795040 ssh -n multinode-795040-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-795040 ssh -n multinode-795040-m03 "sudo cat /home/docker/cp-test_multinode-795040-m02_multinode-795040-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-795040 cp testdata/cp-test.txt multinode-795040-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-795040 ssh -n multinode-795040-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-795040 cp multinode-795040-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile11825836/001/cp-test_multinode-795040-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-795040 ssh -n multinode-795040-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-795040 cp multinode-795040-m03:/home/docker/cp-test.txt multinode-795040:/home/docker/cp-test_multinode-795040-m03_multinode-795040.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-795040 ssh -n multinode-795040-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-795040 ssh -n multinode-795040 "sudo cat /home/docker/cp-test_multinode-795040-m03_multinode-795040.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-795040 cp multinode-795040-m03:/home/docker/cp-test.txt multinode-795040-m02:/home/docker/cp-test_multinode-795040-m03_multinode-795040-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-795040 ssh -n multinode-795040-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-795040 ssh -n multinode-795040-m02 "sudo cat /home/docker/cp-test_multinode-795040-m03_multinode-795040-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.03s)
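
Note: every copy above is validated with the same round trip: cp a file onto a node, then cat it back over ssh. The pattern, with commands verbatim from this run:

	$ out/minikube-linux-arm64 -p multinode-795040 cp testdata/cp-test.txt multinode-795040:/home/docker/cp-test.txt
	$ out/minikube-linux-arm64 -p multinode-795040 ssh -n multinode-795040 "sudo cat /home/docker/cp-test.txt"
	# node-to-node copies (multinode-795040 -> m02 -> m03) follow the same copy-then-cat shape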

TestMultiNode/serial/StopNode (2.3s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-795040 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-795040 node stop m03: (1.215297436s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-795040 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-795040 status: exit status 7 (538.619987ms)

-- stdout --
	multinode-795040
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-795040-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-795040-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-795040 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-795040 status --alsologtostderr: exit status 7 (544.955212ms)

-- stdout --
	multinode-795040
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-795040-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-795040-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0407 13:22:25.791775 1672154 out.go:345] Setting OutFile to fd 1 ...
	I0407 13:22:25.792000 1672154 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 13:22:25.792033 1672154 out.go:358] Setting ErrFile to fd 2...
	I0407 13:22:25.792053 1672154 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 13:22:25.792306 1672154 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20598-1489638/.minikube/bin
	I0407 13:22:25.792520 1672154 out.go:352] Setting JSON to false
	I0407 13:22:25.792577 1672154 mustload.go:65] Loading cluster: multinode-795040
	I0407 13:22:25.792645 1672154 notify.go:220] Checking for updates...
	I0407 13:22:25.793954 1672154 config.go:182] Loaded profile config "multinode-795040": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0407 13:22:25.794011 1672154 status.go:174] checking status of multinode-795040 ...
	I0407 13:22:25.794676 1672154 cli_runner.go:164] Run: docker container inspect multinode-795040 --format={{.State.Status}}
	I0407 13:22:25.814812 1672154 status.go:371] multinode-795040 host status = "Running" (err=<nil>)
	I0407 13:22:25.814832 1672154 host.go:66] Checking if "multinode-795040" exists ...
	I0407 13:22:25.815148 1672154 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-795040
	I0407 13:22:25.840753 1672154 host.go:66] Checking if "multinode-795040" exists ...
	I0407 13:22:25.841051 1672154 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0407 13:22:25.841102 1672154 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-795040
	I0407 13:22:25.859667 1672154 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34441 SSHKeyPath:/home/jenkins/minikube-integration/20598-1489638/.minikube/machines/multinode-795040/id_rsa Username:docker}
	I0407 13:22:25.944718 1672154 ssh_runner.go:195] Run: systemctl --version
	I0407 13:22:25.952981 1672154 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0407 13:22:25.964523 1672154 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0407 13:22:26.019930 1672154 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-04-07 13:22:26.010023657 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1081-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0]] Warnings:<nil>}}
	I0407 13:22:26.020487 1672154 kubeconfig.go:125] found "multinode-795040" server: "https://192.168.58.2:8443"
	I0407 13:22:26.020524 1672154 api_server.go:166] Checking apiserver status ...
	I0407 13:22:26.020569 1672154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:22:26.032775 1672154 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2468/cgroup
	I0407 13:22:26.042608 1672154 api_server.go:182] apiserver freezer: "12:freezer:/docker/913f106ed3030a66814d8976c1b2a56ef5b594b2156e70b937384feb3219b76a/kubepods/burstable/pod882e8785e4553d6b019599d47cd1423a/4b17d3f81bf04ab66f2ff26ae16b10fc8a03ce7895d5de7f144b6968c74e4e5a"
	I0407 13:22:26.042685 1672154 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/913f106ed3030a66814d8976c1b2a56ef5b594b2156e70b937384feb3219b76a/kubepods/burstable/pod882e8785e4553d6b019599d47cd1423a/4b17d3f81bf04ab66f2ff26ae16b10fc8a03ce7895d5de7f144b6968c74e4e5a/freezer.state
	I0407 13:22:26.051751 1672154 api_server.go:204] freezer state: "THAWED"
	I0407 13:22:26.051783 1672154 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0407 13:22:26.060065 1672154 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0407 13:22:26.060093 1672154 status.go:463] multinode-795040 apiserver status = Running (err=<nil>)
	I0407 13:22:26.060104 1672154 status.go:176] multinode-795040 status: &{Name:multinode-795040 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0407 13:22:26.060121 1672154 status.go:174] checking status of multinode-795040-m02 ...
	I0407 13:22:26.060438 1672154 cli_runner.go:164] Run: docker container inspect multinode-795040-m02 --format={{.State.Status}}
	I0407 13:22:26.078825 1672154 status.go:371] multinode-795040-m02 host status = "Running" (err=<nil>)
	I0407 13:22:26.078847 1672154 host.go:66] Checking if "multinode-795040-m02" exists ...
	I0407 13:22:26.079158 1672154 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-795040-m02
	I0407 13:22:26.096327 1672154 host.go:66] Checking if "multinode-795040-m02" exists ...
	I0407 13:22:26.096655 1672154 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0407 13:22:26.096699 1672154 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-795040-m02
	I0407 13:22:26.115446 1672154 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34446 SSHKeyPath:/home/jenkins/minikube-integration/20598-1489638/.minikube/machines/multinode-795040-m02/id_rsa Username:docker}
	I0407 13:22:26.209109 1672154 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0407 13:22:26.226375 1672154 status.go:176] multinode-795040-m02 status: &{Name:multinode-795040-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0407 13:22:26.226405 1672154 status.go:174] checking status of multinode-795040-m03 ...
	I0407 13:22:26.226721 1672154 cli_runner.go:164] Run: docker container inspect multinode-795040-m03 --format={{.State.Status}}
	I0407 13:22:26.249088 1672154 status.go:371] multinode-795040-m03 host status = "Stopped" (err=<nil>)
	I0407 13:22:26.249109 1672154 status.go:384] host is not running, skipping remaining checks
	I0407 13:22:26.249115 1672154 status.go:176] multinode-795040-m03 status: &{Name:multinode-795040-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.30s)
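
Note: exit status 7 is the expected outcome here, not a failure; in this run minikube status exits 7 whenever a node is stopped, and the test asserts on that code. To observe it directly (a sketch):

	$ out/minikube-linux-arm64 -p multinode-795040 status; echo "exit=$?"
	# with m03 stopped this prints the per-node table above followed by exit=7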

TestMultiNode/serial/StartAfterStop (11.51s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-795040 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-795040 node start m03 -v=7 --alsologtostderr: (10.740503493s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-795040 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (11.51s)

TestMultiNode/serial/RestartKeepsNodes (85.19s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-795040
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-795040
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-795040: (22.863186435s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-795040 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-795040 --wait=true -v=8 --alsologtostderr: (1m2.189263253s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-795040
--- PASS: TestMultiNode/serial/RestartKeepsNodes (85.19s)
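
Note: the restart round trip checked here is stop followed by start --wait=true on the same profile; the node list before and after must match. Reduced to its commands (flags verbatim from this run):

	$ out/minikube-linux-arm64 node list -p multinode-795040
	$ out/minikube-linux-arm64 stop -p multinode-795040
	$ out/minikube-linux-arm64 start -p multinode-795040 --wait=true -v=8 --alsologtostderr
	$ out/minikube-linux-arm64 node list -p multinode-795040    # expect the same nodes as before the stop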

TestMultiNode/serial/DeleteNode (5.31s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-795040 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-795040 node delete m03: (4.636783806s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-795040 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.31s)

TestMultiNode/serial/StopMultiNode (21.81s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-795040 stop
E0407 13:24:08.258220 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/addons-378486/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-795040 stop: (21.584817515s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-795040 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-795040 status: exit status 7 (97.684539ms)

-- stdout --
	multinode-795040
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-795040-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-795040 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-795040 status --alsologtostderr: exit status 7 (122.929848ms)

-- stdout --
	multinode-795040
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-795040-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0407 13:24:29.991812 1685882 out.go:345] Setting OutFile to fd 1 ...
	I0407 13:24:29.992003 1685882 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 13:24:29.992016 1685882 out.go:358] Setting ErrFile to fd 2...
	I0407 13:24:29.992022 1685882 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 13:24:29.992321 1685882 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20598-1489638/.minikube/bin
	I0407 13:24:29.992547 1685882 out.go:352] Setting JSON to false
	I0407 13:24:29.992602 1685882 mustload.go:65] Loading cluster: multinode-795040
	I0407 13:24:29.992641 1685882 notify.go:220] Checking for updates...
	I0407 13:24:29.993046 1685882 config.go:182] Loaded profile config "multinode-795040": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0407 13:24:29.993070 1685882 status.go:174] checking status of multinode-795040 ...
	I0407 13:24:29.993645 1685882 cli_runner.go:164] Run: docker container inspect multinode-795040 --format={{.State.Status}}
	I0407 13:24:30.034979 1685882 status.go:371] multinode-795040 host status = "Stopped" (err=<nil>)
	I0407 13:24:30.035006 1685882 status.go:384] host is not running, skipping remaining checks
	I0407 13:24:30.035014 1685882 status.go:176] multinode-795040 status: &{Name:multinode-795040 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0407 13:24:30.035059 1685882 status.go:174] checking status of multinode-795040-m02 ...
	I0407 13:24:30.035392 1685882 cli_runner.go:164] Run: docker container inspect multinode-795040-m02 --format={{.State.Status}}
	I0407 13:24:30.057747 1685882 status.go:371] multinode-795040-m02 host status = "Stopped" (err=<nil>)
	I0407 13:24:30.057774 1685882 status.go:384] host is not running, skipping remaining checks
	I0407 13:24:30.057783 1685882 status.go:176] multinode-795040-m02 status: &{Name:multinode-795040-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (21.81s)

TestMultiNode/serial/RestartMultiNode (56.99s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-795040 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
E0407 13:25:17.615621 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/functional-340022/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-795040 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (56.215824592s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-795040 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (56.99s)

TestMultiNode/serial/ValidateNameConflict (33.9s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-795040
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-795040-m02 --driver=docker  --container-runtime=docker
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-795040-m02 --driver=docker  --container-runtime=docker: exit status 14 (115.285313ms)

-- stdout --
	* [multinode-795040-m02] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20598
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20598-1489638/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20598-1489638/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-795040-m02' is duplicated with machine name 'multinode-795040-m02' in profile 'multinode-795040'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-795040-m03 --driver=docker  --container-runtime=docker
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-795040-m03 --driver=docker  --container-runtime=docker: (31.082366863s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-795040
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-795040: exit status 80 (331.480376ms)

-- stdout --
	* Adding node m03 to cluster multinode-795040 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-795040-m03 already exists in multinode-795040-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-795040-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-795040-m03: (2.293042474s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (33.90s)
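
Note: both rejections above come from minikube's name-collision checks: a new profile may not reuse an existing machine name (exit 14, MK_USAGE), and node add refuses when the next node name already exists as a standalone profile (exit 80, GUEST_NODE_ADD). Reduced to a minimal repro:

	$ out/minikube-linux-arm64 start -p multinode-795040-m02 --driver=docker   # rejected: machine name already in use
	$ out/minikube-linux-arm64 node add -p multinode-795040                    # rejected: next node m03 exists as a profile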

TestPreload (140.09s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-571224 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-571224 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4: (1m42.579038388s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-571224 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-571224 image pull gcr.io/k8s-minikube/busybox: (2.162255678s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-571224
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-571224: (10.851087748s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-571224 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-571224 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker: (21.983811719s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-571224 image list
helpers_test.go:175: Cleaning up "test-preload-571224" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-571224
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-571224: (2.224274906s)
--- PASS: TestPreload (140.09s)
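
Note: TestPreload checks that pulled images survive a restart when preloaded tarballs are disabled. The flow, with versions and commands as run above:

	$ out/minikube-linux-arm64 start -p test-preload-571224 --preload=false --kubernetes-version=v1.24.4 ...
	$ out/minikube-linux-arm64 -p test-preload-571224 image pull gcr.io/k8s-minikube/busybox
	$ out/minikube-linux-arm64 stop -p test-preload-571224
	$ out/minikube-linux-arm64 start -p test-preload-571224 ...    # restart on the default Kubernetes version
	$ out/minikube-linux-arm64 -p test-preload-571224 image list   # busybox must still be listed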

TestScheduledStopUnix (107.02s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-067443 --memory=2048 --driver=docker  --container-runtime=docker
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-067443 --memory=2048 --driver=docker  --container-runtime=docker: (33.770146993s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-067443 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-067443 -n scheduled-stop-067443
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-067443 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0407 13:28:59.521133 1495026 retry.go:31] will retry after 132.751µs: open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/scheduled-stop-067443/pid: no such file or directory
I0407 13:28:59.522311 1495026 retry.go:31] will retry after 119.016µs: open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/scheduled-stop-067443/pid: no such file or directory
I0407 13:28:59.523527 1495026 retry.go:31] will retry after 294.593µs: open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/scheduled-stop-067443/pid: no such file or directory
I0407 13:28:59.524686 1495026 retry.go:31] will retry after 485.192µs: open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/scheduled-stop-067443/pid: no such file or directory
I0407 13:28:59.525860 1495026 retry.go:31] will retry after 687.904µs: open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/scheduled-stop-067443/pid: no such file or directory
I0407 13:28:59.526659 1495026 retry.go:31] will retry after 550.314µs: open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/scheduled-stop-067443/pid: no such file or directory
I0407 13:28:59.528219 1495026 retry.go:31] will retry after 1.070383ms: open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/scheduled-stop-067443/pid: no such file or directory
I0407 13:28:59.530364 1495026 retry.go:31] will retry after 1.156616ms: open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/scheduled-stop-067443/pid: no such file or directory
I0407 13:28:59.531822 1495026 retry.go:31] will retry after 3.585069ms: open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/scheduled-stop-067443/pid: no such file or directory
I0407 13:28:59.536051 1495026 retry.go:31] will retry after 3.038316ms: open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/scheduled-stop-067443/pid: no such file or directory
I0407 13:28:59.539224 1495026 retry.go:31] will retry after 3.830335ms: open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/scheduled-stop-067443/pid: no such file or directory
I0407 13:28:59.543484 1495026 retry.go:31] will retry after 6.625605ms: open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/scheduled-stop-067443/pid: no such file or directory
I0407 13:28:59.550885 1495026 retry.go:31] will retry after 19.207134ms: open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/scheduled-stop-067443/pid: no such file or directory
I0407 13:28:59.571164 1495026 retry.go:31] will retry after 21.447493ms: open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/scheduled-stop-067443/pid: no such file or directory
I0407 13:28:59.593426 1495026 retry.go:31] will retry after 36.381641ms: open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/scheduled-stop-067443/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-067443 --cancel-scheduled
E0407 13:29:08.258331 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/addons-378486/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-067443 -n scheduled-stop-067443
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-067443
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-067443 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-067443
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-067443: exit status 7 (77.662479ms)

-- stdout --
	scheduled-stop-067443
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-067443 -n scheduled-stop-067443
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-067443 -n scheduled-stop-067443: exit status 7 (68.204894ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-067443" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-067443
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-067443: (1.721145634s)
--- PASS: TestScheduledStopUnix (107.02s)
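
Note: the scheduled-stop workflow above is driven entirely by the stop subcommand, and the microsecond-scale retries are the test polling the schedule's pid file. Basic usage, flags verbatim from this run:

	$ out/minikube-linux-arm64 stop -p scheduled-stop-067443 --schedule 5m        # arm a stop five minutes out
	$ out/minikube-linux-arm64 stop -p scheduled-stop-067443 --cancel-scheduled   # disarm it
	$ out/minikube-linux-arm64 stop -p scheduled-stop-067443 --schedule 15s       # re-arm; soon after, status reports Stopped (exit 7)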

TestSkaffold (118.36s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe3385686896 version
skaffold_test.go:63: skaffold version: v2.15.0
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p skaffold-943047 --memory=2600 --driver=docker  --container-runtime=docker
E0407 13:30:17.615699 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/functional-340022/client.crt: no such file or directory" logger="UnhandledError"
skaffold_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p skaffold-943047 --memory=2600 --driver=docker  --container-runtime=docker: (32.383494859s)
skaffold_test.go:86: copying out/minikube-linux-arm64 to /home/jenkins/workspace/Docker_Linux_docker_arm64/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe3385686896 run --minikube-profile skaffold-943047 --kube-context skaffold-943047 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe3385686896 run --minikube-profile skaffold-943047 --kube-context skaffold-943047 --status-check=true --port-forward=false --interactive=false: (1m9.805481431s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-7467bd9f4f-5jdk5" [a9492150-1530-4811-a689-956f379795be] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.002702186s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-7f7b7ddfd4-l8mph" [519b370e-bd02-4299-8dcd-bd5531c20add] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.00330542s
helpers_test.go:175: Cleaning up "skaffold-943047" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p skaffold-943047
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p skaffold-943047: (2.997849515s)
--- PASS: TestSkaffold (118.36s)
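
Note: the skaffold half of this test is a stock skaffold run pointed at the minikube profile (the /tmp/skaffold.exe... path is just the test's temporary copy of the skaffold binary). The equivalent invocation, assuming skaffold on PATH:

	$ skaffold run --minikube-profile skaffold-943047 --kube-context skaffold-943047 --status-check=true --port-forward=false --interactive=false
	# afterwards pods labeled app=leeroy-app and app=leeroy-web must reach Running within 1m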

TestInsufficientStorage (10.28s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-203819 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker
E0407 13:32:11.327166 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/addons-378486/client.crt: no such file or directory" logger="UnhandledError"
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-203819 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker: exit status 26 (7.966697275s)

-- stdout --
	{"specversion":"1.0","id":"1cb52764-57a9-435f-bd4d-b4f449e50f9e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-203819] minikube v1.35.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"d2ccdee2-85d5-470f-ad68-cec035fd8b90","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20598"}}
	{"specversion":"1.0","id":"99430160-e899-4bbc-83b2-a35d607abdf7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"0418713d-ce46-44c0-bc2a-02efa804d364","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20598-1489638/kubeconfig"}}
	{"specversion":"1.0","id":"e0266b75-37e0-45c2-b149-a8f2a22b98f8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20598-1489638/.minikube"}}
	{"specversion":"1.0","id":"11a1f23b-6909-431f-aca7-ee35ae845ae2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"02bb0ae7-0820-4148-9589-a1fe5636b8df","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"05ae8be7-8f4c-4209-8eb4-95ffd523c28e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"9cea5e2e-abd8-421c-9192-bca08f7b61e1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"89607627-c4e5-4719-9c25-3b210a5bf3f5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"e5c9fe67-6b0e-453d-8b44-70bcd59927a8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"16f222f9-dd9b-4081-81c8-9f2e954f36c0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-203819\" primary control-plane node in \"insufficient-storage-203819\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"0c7aebeb-341c-413f-bc98-2c99902268d3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.46-1743675393-20591 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"1cf394f3-2d64-4ffd-80f1-b9686a33f79f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"9cf3806c-b832-4300-a7dc-b397e91fa01b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-203819 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-203819 --output=json --layout=cluster: exit status 7 (296.260301ms)

-- stdout --
	{"Name":"insufficient-storage-203819","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-203819","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0407 13:32:18.877877 1721866 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-203819" does not appear in /home/jenkins/minikube-integration/20598-1489638/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-203819 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-203819 --output=json --layout=cluster: exit status 7 (293.920279ms)

-- stdout --
	{"Name":"insufficient-storage-203819","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-203819","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0407 13:32:19.172917 1721931 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-203819" does not appear in /home/jenkins/minikube-integration/20598-1489638/kubeconfig
	E0407 13:32:19.183122 1721931 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/insufficient-storage-203819/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-203819" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-203819
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-203819: (1.722419872s)
--- PASS: TestInsufficientStorage (10.28s)
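
Note: with MINIKUBE_TEST_STORAGE_CAPACITY=100 the start aborts with exit code 26 (RSRC_DOCKER_STORAGE) and emits CloudEvents-style JSON; status --layout=cluster then reports the HTTP-style code 507. One way to pull the status name out of that JSON (a sketch, assuming jq is available):

	$ out/minikube-linux-arm64 status -p insufficient-storage-203819 --output=json --layout=cluster | jq -r .StatusName
	# prints InsufficientStorage; the status command itself exits 7 because the node never came up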

TestRunningBinaryUpgrade (77.79s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.1687073175 start -p running-upgrade-066842 --memory=2200 --vm-driver=docker  --container-runtime=docker
E0407 13:38:18.542419 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/skaffold-943047/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:38:20.683620 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/functional-340022/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.1687073175 start -p running-upgrade-066842 --memory=2200 --vm-driver=docker  --container-runtime=docker: (37.745104596s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-066842 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-066842 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (36.948358466s)
helpers_test.go:175: Cleaning up "running-upgrade-066842" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-066842
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-066842: (2.375310124s)
--- PASS: TestRunningBinaryUpgrade (77.79s)

TestKubernetesUpgrade (385.96s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-667898 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0407 13:34:08.258150 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/addons-378486/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-667898 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (1m4.508906718s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-667898
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-667898: (11.227373339s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-667898 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-667898 status --format={{.Host}}: exit status 7 (105.474638ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-667898 --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-667898 --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (4m42.224672331s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-667898 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-667898 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=docker
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-667898 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=docker: exit status 106 (119.45921ms)

-- stdout --
	* [kubernetes-upgrade-667898] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20598
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20598-1489638/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20598-1489638/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.32.2 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-667898
	    minikube start -p kubernetes-upgrade-667898 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-6678982 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.32.2, by running:
	    
	    minikube start -p kubernetes-upgrade-667898 --kubernetes-version=v1.32.2
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-667898 --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-667898 --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (25.213010685s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-667898" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-667898
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-667898: (2.437615857s)
--- PASS: TestKubernetesUpgrade (385.96s)
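
Note: the downgrade attempt is expected to fail fast (exit 106, K8S_DOWNGRADE_UNSUPPORTED) without touching the running cluster; minikube only moves a cluster's Kubernetes version forward in place. The guarded sequence, reduced to its commands:

	$ minikube start -p kubernetes-upgrade-667898 --kubernetes-version=v1.20.0
	$ minikube stop -p kubernetes-upgrade-667898
	$ minikube start -p kubernetes-upgrade-667898 --kubernetes-version=v1.32.2   # upgrade: allowed
	$ minikube start -p kubernetes-upgrade-667898 --kubernetes-version=v1.20.0   # downgrade: refused with exit 106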

TestMissingContainerUpgrade (161.79s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.3570110315 start -p missing-upgrade-303544 --memory=2200 --driver=docker  --container-runtime=docker
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.3570110315 start -p missing-upgrade-303544 --memory=2200 --driver=docker  --container-runtime=docker: (1m25.244055639s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-303544
E0407 13:35:17.615787 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/functional-340022/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-303544: (13.730218351s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-303544
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-303544 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-303544 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (59.853597317s)
helpers_test.go:175: Cleaning up "missing-upgrade-303544" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-303544
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-303544: (2.140267812s)
--- PASS: TestMissingContainerUpgrade (161.79s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-549182 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-549182 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker: exit status 14 (104.693805ms)

-- stdout --
	* [NoKubernetes-549182] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20598
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20598-1489638/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20598-1489638/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
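
Note: exit status 14 is minikube's usage-error code; --no-kubernetes and --kubernetes-version are mutually exclusive. A self-contained sketch of that kind of flag guard, standard library only (this is not minikube's actual implementation):

	package main

	import (
		"flag"
		"fmt"
		"os"
	)

	func main() {
		noK8s := flag.Bool("no-kubernetes", false, "start the node without Kubernetes")
		version := flag.String("kubernetes-version", "", "Kubernetes version to run")
		flag.Parse()

		// Reject the contradictory combination up front, mirroring the
		// MK_USAGE failure quoted above.
		if *noK8s && *version != "" {
			fmt.Fprintln(os.Stderr, "cannot specify --kubernetes-version with --no-kubernetes")
			os.Exit(14)
		}
		fmt.Println("flags OK")
	}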

TestNoKubernetes/serial/StartWithK8s (39.75s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-549182 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-549182 --driver=docker  --container-runtime=docker: (39.194940584s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-549182 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (39.75s)

TestNoKubernetes/serial/StartWithStopK8s (18.23s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-549182 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-549182 --no-kubernetes --driver=docker  --container-runtime=docker: (16.187812639s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-549182 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-549182 status -o json: exit status 2 (296.439144ms)

-- stdout --
	{"Name":"NoKubernetes-549182","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-549182
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-549182: (1.749167s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (18.23s)
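
Note: the JSON status probed here is what lets the test distinguish "container up, Kubernetes stopped" from a fully stopped profile. A sketch of decoding that payload; the struct shape is inferred from the single sample above, not taken from minikube's source:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// profileStatus mirrors the fields visible in the sample above.
	type profileStatus struct {
		Name       string `json:"Name"`
		Host       string `json:"Host"`
		Kubelet    string `json:"Kubelet"`
		APIServer  string `json:"APIServer"`
		Kubeconfig string `json:"Kubeconfig"`
		Worker     bool   `json:"Worker"`
	}

	func main() {
		raw := `{"Name":"NoKubernetes-549182","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}`
		var st profileStatus
		if err := json.Unmarshal([]byte(raw), &st); err != nil {
			panic(err)
		}
		// Host running with kubelet and apiserver stopped is exactly the
		// state StartWithStopK8s expects after disabling Kubernetes.
		fmt.Printf("host=%s kubelet=%s apiserver=%s\n", st.Host, st.Kubelet, st.APIServer)
	}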

TestNoKubernetes/serial/Start (9.36s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-549182 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-549182 --no-kubernetes --driver=docker  --container-runtime=docker: (9.364397267s)
--- PASS: TestNoKubernetes/serial/Start (9.36s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-549182 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-549182 "sudo systemctl is-active --quiet service kubelet": exit status 1 (280.860336ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)
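
Note: the assertion here is inverted. `systemctl is-active` exits with status 3 when a unit is inactive, so the ssh probe failing (exit status 1, wrapping the remote status 3 shown in stderr) is the passing outcome. A self-contained sketch of that probe:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Probe the node over minikube's ssh wrapper; a non-nil error means
		// kubelet is not active, which is what a --no-kubernetes profile wants.
		cmd := exec.Command("out/minikube-linux-arm64", "ssh", "-p", "NoKubernetes-549182",
			"sudo systemctl is-active --quiet service kubelet")
		if err := cmd.Run(); err != nil {
			fmt.Println("kubelet inactive, as expected:", err)
			return
		}
		fmt.Println("unexpected: kubelet is running")
	}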

TestNoKubernetes/serial/ProfileList (1.03s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.03s)

TestNoKubernetes/serial/Stop (1.22s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-549182
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-549182: (1.22367176s)
--- PASS: TestNoKubernetes/serial/Stop (1.22s)

TestNoKubernetes/serial/StartNoArgs (7.35s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-549182 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-549182 --driver=docker  --container-runtime=docker: (7.346057589s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.35s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-549182 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-549182 "sudo systemctl is-active --quiet service kubelet": exit status 1 (266.751442ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

TestStoppedBinaryUpgrade/Setup (0.79s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.79s)

TestStoppedBinaryUpgrade/Upgrade (83.27s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1162022421 start -p stopped-upgrade-726842 --memory=2200 --vm-driver=docker  --container-runtime=docker
E0407 13:36:56.602353 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/skaffold-943047/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:36:56.608663 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/skaffold-943047/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:36:56.619966 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/skaffold-943047/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:36:56.641521 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/skaffold-943047/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:36:56.683543 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/skaffold-943047/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:36:56.764877 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/skaffold-943047/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:36:56.926884 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/skaffold-943047/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:36:57.248581 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/skaffold-943047/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:36:57.890811 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/skaffold-943047/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:36:59.173084 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/skaffold-943047/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1162022421 start -p stopped-upgrade-726842 --memory=2200 --vm-driver=docker  --container-runtime=docker: (38.343786248s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1162022421 -p stopped-upgrade-726842 stop
E0407 13:37:01.735159 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/skaffold-943047/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:37:06.857409 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/skaffold-943047/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1162022421 -p stopped-upgrade-726842 stop: (10.886318571s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-726842 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0407 13:37:17.099081 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/skaffold-943047/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:37:37.580683 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/skaffold-943047/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-726842 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (34.040256734s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (83.27s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.78s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-726842
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-726842: (1.777102034s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.78s)

TestPause/serial/Start (79.58s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-212455 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker
E0407 13:39:08.258966 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/addons-378486/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:39:40.465605 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/skaffold-943047/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-212455 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker: (1m19.578777889s)
--- PASS: TestPause/serial/Start (79.58s)

TestPause/serial/SecondStartNoReconfiguration (31.44s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-212455 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-212455 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (31.418235009s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (31.44s)

TestPause/serial/Pause (0.74s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-212455 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.74s)

TestPause/serial/VerifyStatus (0.41s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-212455 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-212455 --output=json --layout=cluster: exit status 2 (409.183277ms)

-- stdout --
	{"Name":"pause-212455","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-212455","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.41s)
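
Note: the --layout=cluster payload nests per-node component states and reuses HTTP-flavored status codes (200 OK, 405 Stopped, 418 Paused), which is why a paused cluster yields exit status 2 here. A sketch of decoding it, with the struct shape inferred from this one sample rather than minikube's source:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	type component struct {
		Name       string `json:"Name"`
		StatusCode int    `json:"StatusCode"`
		StatusName string `json:"StatusName"`
	}

	type node struct {
		Name       string               `json:"Name"`
		Components map[string]component `json:"Components"`
	}

	type clusterStatus struct {
		Name       string `json:"Name"`
		StatusCode int    `json:"StatusCode"`
		StatusName string `json:"StatusName"`
		Nodes      []node `json:"Nodes"`
	}

	func main() {
		raw := `{"Name":"pause-212455","StatusCode":418,"StatusName":"Paused",
		  "Nodes":[{"Name":"pause-212455","Components":{
		    "apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},
		    "kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}`
		var cs clusterStatus
		if err := json.Unmarshal([]byte(raw), &cs); err != nil {
			panic(err)
		}
		for name, c := range cs.Nodes[0].Components {
			fmt.Printf("%s: %d %s\n", name, c.StatusCode, c.StatusName)
		}
	}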

TestPause/serial/Unpause (0.84s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-212455 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.84s)

TestPause/serial/PauseAgain (1.07s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-212455 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-212455 --alsologtostderr -v=5: (1.071886003s)
--- PASS: TestPause/serial/PauseAgain (1.07s)

TestPause/serial/DeletePaused (2.41s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-212455 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-212455 --alsologtostderr -v=5: (2.41442149s)
--- PASS: TestPause/serial/DeletePaused (2.41s)

TestPause/serial/VerifyDeletedResources (0.19s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-212455
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-212455: exit status 1 (20.105235ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-212455: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.19s)
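
Note: deletion is verified negatively — after `minikube delete`, inspecting the profile's named Docker volume must fail with "no such volume". A sketch of that probe:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// docker volume inspect exits non-zero and prints "[]" plus a daemon
		// error once the volume is gone; success here would mean a leak.
		out, err := exec.Command("docker", "volume", "inspect", "pause-212455").CombinedOutput()
		if err != nil {
			fmt.Printf("volume gone, as expected:\n%s", out)
			return
		}
		fmt.Println("unexpected: volume still exists")
	}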

TestStartStop/group/old-k8s-version/serial/FirstStart (138.57s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-169187 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0
E0407 13:44:08.259002 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/addons-378486/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-169187 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0: (2m18.564926977s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (138.57s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (79.57s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-872084 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-872084 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.2: (1m19.5656883s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (79.57s)

TestStartStop/group/old-k8s-version/serial/DeployApp (10.64s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-169187 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [47bab58d-535e-4311-a6cd-9de5d96e0b2c] Pending
helpers_test.go:344: "busybox" [47bab58d-535e-4311-a6cd-9de5d96e0b2c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0407 13:45:17.615843 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/functional-340022/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [47bab58d-535e-4311-a6cd-9de5d96e0b2c] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.003163059s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-169187 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.64s)
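
Note: the deploy step is a label-selector wait — the suite polls for pods matching integration-test=busybox until they are healthy, then execs into the pod. The same flow expressed with plain kubectl instead of the helpers_test.go poller (an equivalent sketch, not the test's actual code):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Same 8m budget the test uses while waiting for the busybox pod.
		wait := exec.Command("kubectl", "--context", "old-k8s-version-169187",
			"wait", "--for=condition=Ready", "pod",
			"-l", "integration-test=busybox", "--timeout=8m0s")
		if out, err := wait.CombinedOutput(); err != nil {
			fmt.Printf("pod never became ready: %v\n%s", err, out)
			return
		}
		// Mirror the follow-up check from the log.
		out, err := exec.Command("kubectl", "--context", "old-k8s-version-169187",
			"exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n").CombinedOutput()
		if err != nil {
			fmt.Printf("exec failed: %v\n", err)
			return
		}
		fmt.Printf("open-file limit in pod: %s", out)
	}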

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.45s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-169187 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-169187 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.282089809s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-169187 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.45s)

TestStartStop/group/old-k8s-version/serial/Stop (11.33s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-169187 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-169187 --alsologtostderr -v=3: (11.330081758s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.33s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.32s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-169187 -n old-k8s-version-169187
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-169187 -n old-k8s-version-169187: exit status 7 (134.175784ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-169187 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.32s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.38s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-872084 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [7b532068-4f02-4668-9bf7-f4be0d9566bc] Pending
helpers_test.go:344: "busybox" [7b532068-4f02-4668-9bf7-f4be0d9566bc] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [7b532068-4f02-4668-9bf7-f4be0d9566bc] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.004042751s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-872084 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.38s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.15s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-872084 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-872084 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.023280376s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-872084 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.15s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (10.84s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-872084 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-872084 --alsologtostderr -v=3: (10.843452025s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (10.84s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-872084 -n default-k8s-diff-port-872084
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-872084 -n default-k8s-diff-port-872084: exit status 7 (70.717016ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-872084 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (266.25s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-872084 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.2
E0407 13:46:56.601982 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/skaffold-943047/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:48:51.330338 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/addons-378486/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:49:08.258909 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/addons-378486/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:50:17.615725 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/functional-340022/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-872084 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.2: (4m25.843713169s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-872084 -n default-k8s-diff-port-872084
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (266.25s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-ctxsc" [0f3980c5-52d6-4c8a-9a31-9a4061fa0fcf] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003001056s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-ctxsc" [0f3980c5-52d6-4c8a-9a31-9a4061fa0fcf] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005603186s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-872084 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.21s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-872084 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.21s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.94s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-872084 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-872084 -n default-k8s-diff-port-872084
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-872084 -n default-k8s-diff-port-872084: exit status 2 (338.923887ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-872084 -n default-k8s-diff-port-872084
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-872084 -n default-k8s-diff-port-872084: exit status 2 (369.933371ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-872084 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-872084 -n default-k8s-diff-port-872084
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-872084 -n default-k8s-diff-port-872084
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.94s)

TestStartStop/group/embed-certs/serial/FirstStart (56.65s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-690840 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-690840 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.2: (56.64890078s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (56.65s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-jg6t2" [cd6f3459-1f34-44b0-8ff5-59f6d4be5f1e] Running
E0407 13:51:56.602064 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/skaffold-943047/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003506341s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.19s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-jg6t2" [cd6f3459-1f34-44b0-8ff5-59f6d4be5f1e] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005011988s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-169187 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.19s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.29s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-169187 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.29s)

TestStartStop/group/old-k8s-version/serial/Pause (4.04s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-169187 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-169187 -n old-k8s-version-169187
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-169187 -n old-k8s-version-169187: exit status 2 (472.381255ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-169187 -n old-k8s-version-169187
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-169187 -n old-k8s-version-169187: exit status 2 (452.265468ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-169187 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-169187 -n old-k8s-version-169187
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-169187 -n old-k8s-version-169187
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (4.04s)

TestStartStop/group/no-preload/serial/FirstStart (86.88s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-474436 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-474436 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.2: (1m26.882671392s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (86.88s)

TestStartStop/group/embed-certs/serial/DeployApp (13.48s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-690840 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [c0c047b1-9a67-4450-94f2-c2c288fc1194] Pending
helpers_test.go:344: "busybox" [c0c047b1-9a67-4450-94f2-c2c288fc1194] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [c0c047b1-9a67-4450-94f2-c2c288fc1194] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 13.003545843s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-690840 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (13.48s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.42s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-690840 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-690840 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.232132608s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-690840 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.42s)

TestStartStop/group/embed-certs/serial/Stop (11.01s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-690840 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-690840 --alsologtostderr -v=3: (11.005688933s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.01s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.24s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-690840 -n embed-certs-690840
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-690840 -n embed-certs-690840: exit status 7 (90.499107ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-690840 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.24s)

TestStartStop/group/embed-certs/serial/SecondStart (266.73s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-690840 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.2
E0407 13:53:19.670697 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/skaffold-943047/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-690840 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.2: (4m26.350540039s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-690840 -n embed-certs-690840
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (266.73s)

TestStartStop/group/no-preload/serial/DeployApp (11.39s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-474436 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [0fa0188d-5f59-40fe-b718-173e3c8b87fc] Pending
helpers_test.go:344: "busybox" [0fa0188d-5f59-40fe-b718-173e3c8b87fc] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 11.006603625s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-474436 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (11.39s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.18s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-474436 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-474436 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.054113958s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-474436 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.18s)

TestStartStop/group/no-preload/serial/Stop (11.02s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-474436 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-474436 --alsologtostderr -v=3: (11.018374208s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.02s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-474436 -n no-preload-474436
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-474436 -n no-preload-474436: exit status 7 (70.860257ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-474436 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/no-preload/serial/SecondStart (268s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-474436 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.2
E0407 13:54:08.258393 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/addons-378486/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:55:00.685278 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/functional-340022/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:55:13.869594 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/old-k8s-version-169187/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:55:13.875902 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/old-k8s-version-169187/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:55:13.887242 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/old-k8s-version-169187/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:55:13.908604 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/old-k8s-version-169187/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:55:13.950059 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/old-k8s-version-169187/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:55:14.031565 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/old-k8s-version-169187/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:55:14.193074 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/old-k8s-version-169187/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:55:14.514839 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/old-k8s-version-169187/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:55:15.156831 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/old-k8s-version-169187/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:55:16.438251 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/old-k8s-version-169187/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:55:17.615809 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/functional-340022/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:55:18.999660 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/old-k8s-version-169187/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:55:24.121532 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/old-k8s-version-169187/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:55:34.362973 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/old-k8s-version-169187/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:55:54.844544 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/old-k8s-version-169187/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:56:31.348028 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/default-k8s-diff-port-872084/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:56:31.354487 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/default-k8s-diff-port-872084/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:56:31.365890 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/default-k8s-diff-port-872084/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:56:31.387274 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/default-k8s-diff-port-872084/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:56:31.428631 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/default-k8s-diff-port-872084/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:56:31.509996 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/default-k8s-diff-port-872084/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:56:31.671937 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/default-k8s-diff-port-872084/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:56:31.994321 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/default-k8s-diff-port-872084/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:56:32.636097 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/default-k8s-diff-port-872084/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:56:33.917773 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/default-k8s-diff-port-872084/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:56:35.806788 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/old-k8s-version-169187/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:56:36.479909 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/default-k8s-diff-port-872084/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:56:41.601970 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/default-k8s-diff-port-872084/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:56:51.843656 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/default-k8s-diff-port-872084/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:56:56.601441 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/skaffold-943047/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:57:12.325707 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/default-k8s-diff-port-872084/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-474436 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.2: (4m27.516636979s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-474436 -n no-preload-474436
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (268.00s)
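Note: this SecondStart reuses the profile created by FirstStart, and --preload=false makes minikube skip the preloaded-images tarball and pull the images individually, which accounts for the ~4.5-minute start. The start-then-verify sequence, condensed from the run above:

	out/minikube-linux-arm64 start -p no-preload-474436 --memory=2200 --wait=true --preload=false \
	  --driver=docker --container-runtime=docker --kubernetes-version=v1.32.2
	out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-474436 -n no-preload-474436   # expect Running, exit 0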
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-mdb8j" [f5fa234e-f576-46c8-a3f7-25e0a65a409d] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003555981s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.11s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-mdb8j" [f5fa234e-f576-46c8-a3f7-25e0a65a409d] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004662603s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-690840 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.11s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-690840 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.22s)

TestStartStop/group/embed-certs/serial/Pause (2.84s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-690840 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-690840 -n embed-certs-690840
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-690840 -n embed-certs-690840: exit status 2 (341.458337ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-690840 -n embed-certs-690840
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-690840 -n embed-certs-690840: exit status 2 (321.677578ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-690840 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-690840 -n embed-certs-690840
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-690840 -n embed-certs-690840
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.84s)
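Note: the two exit-status-2 results above are the signal being tested, not errors: after pause, {{.APIServer}} reports Paused and {{.Kubelet}} reports Stopped, and a non-Running component makes the status command exit non-zero. The round trip as a sketch:

	out/minikube-linux-arm64 pause -p embed-certs-690840 --alsologtostderr -v=1
	out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-690840 -n embed-certs-690840   # Paused, exit 2
	out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-690840 -n embed-certs-690840     # Stopped, exit 2
	out/minikube-linux-arm64 unpause -p embed-certs-690840 --alsologtostderr -v=1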
TestStartStop/group/newest-cni/serial/FirstStart (37.12s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-263919 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.2
E0407 13:57:53.287571 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/default-k8s-diff-port-872084/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:57:57.728903 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/old-k8s-version-169187/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-263919 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.2: (37.121046286s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (37.12s)

TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.51s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-263919 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-263919 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.508190242s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.51s)

TestStartStop/group/newest-cni/serial/Stop (9.19s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-263919 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-263919 --alsologtostderr -v=3: (9.193946037s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (9.19s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.29s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-263919 -n newest-cni-263919
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-263919 -n newest-cni-263919: exit status 7 (102.605436ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-263919 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.29s)

TestStartStop/group/newest-cni/serial/SecondStart (22.85s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-263919 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-263919 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.2: (22.287744542s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-263919 -n newest-cni-263919
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (22.85s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-j2ksz" [c429f216-9c61-4125-80b7-ddd429259f4e] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004187185s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.16s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-j2ksz" [c429f216-9c61-4125-80b7-ddd429259f4e] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003753132s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-474436 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.16s)
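Note: UserAppExistsAfterStop and AddonExistsAfterStop both poll for pods labelled k8s-app=kubernetes-dashboard until they report Running. A manual equivalent of the same inspection (context name taken from the log):

	kubectl --context no-preload-474436 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	kubectl --context no-preload-474436 -n kubernetes-dashboard describe deploy/dashboard-metrics-scraper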
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.32s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-474436 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.32s)

TestStartStop/group/no-preload/serial/Pause (4.21s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-474436 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 pause -p no-preload-474436 --alsologtostderr -v=1: (1.035406622s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-474436 -n no-preload-474436
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-474436 -n no-preload-474436: exit status 2 (488.136883ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-474436 -n no-preload-474436
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-474436 -n no-preload-474436: exit status 2 (448.201669ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-474436 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-474436 -n no-preload-474436
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-474436 -n no-preload-474436
--- PASS: TestStartStop/group/no-preload/serial/Pause (4.21s)

TestNetworkPlugins/group/auto/Start (87.21s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-824648 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-824648 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker: (1m27.208168309s)
--- PASS: TestNetworkPlugins/group/auto/Start (87.21s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.40s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-263919 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.40s)

TestStartStop/group/newest-cni/serial/Pause (4.56s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-263919 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-263919 -n newest-cni-263919
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-263919 -n newest-cni-263919: exit status 2 (558.822876ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-263919 -n newest-cni-263919
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-263919 -n newest-cni-263919: exit status 2 (565.262982ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-263919 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 unpause -p newest-cni-263919 --alsologtostderr -v=1: (1.116476467s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-263919 -n newest-cni-263919
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-263919 -n newest-cni-263919
--- PASS: TestStartStop/group/newest-cni/serial/Pause (4.56s)
E0407 14:06:56.601403 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/skaffold-943047/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:06:59.643185 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/custom-flannel-824648/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:06:59.649626 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/custom-flannel-824648/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:06:59.661070 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/custom-flannel-824648/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:06:59.682511 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/custom-flannel-824648/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:06:59.723917 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/custom-flannel-824648/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:06:59.805320 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/custom-flannel-824648/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:06:59.966797 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/custom-flannel-824648/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:07:00.288314 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/custom-flannel-824648/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:07:00.930364 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/custom-flannel-824648/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:07:02.211684 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/custom-flannel-824648/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:07:04.773566 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/custom-flannel-824648/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:07:09.894888 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/custom-flannel-824648/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:07:20.136247 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/custom-flannel-824648/client.crt: no such file or directory" logger="UnhandledError"

TestNetworkPlugins/group/kindnet/Start (73.80s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-824648 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker
E0407 13:59:08.259113 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/addons-378486/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:59:15.209018 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/default-k8s-diff-port-872084/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:00:13.869613 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/old-k8s-version-169187/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-824648 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker: (1m13.801742914s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (73.80s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-px79j" [ed8817a2-236c-45db-9e70-b0a58e2a8ee5] Running
E0407 14:00:17.615843 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/functional-340022/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003742049s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
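Note: ControllerPod waits for the CNI's node agent (here the pod labelled app=kindnet in kube-system) to become Ready. A rough kubectl equivalent of that wait; "kubectl wait" is a stand-in, since the harness polls pod state itself:

	kubectl --context kindnet-824648 -n kube-system wait pod -l app=kindnet --for=condition=Ready --timeout=10m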
TestNetworkPlugins/group/auto/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-824648 "pgrep -a kubelet"
I0407 14:00:20.227194 1495026 config.go:182] Loaded profile config "auto-824648": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.32s)
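Note: KubeletFlags asserts that the kubelet command line is readable over SSH; the useful part is the output itself, which lists every flag the node's kubelet was started with:

	out/minikube-linux-arm64 ssh -p auto-824648 "pgrep -a kubelet"   # PID followed by the full kubelet argument list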
TestNetworkPlugins/group/auto/NetCatPod (12.31s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-824648 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-zgnpx" [1dcf8215-52c5-492d-8b5e-c7adcbe85387] Pending
helpers_test.go:344: "netcat-5d86dc444-zgnpx" [1dcf8215-52c5-492d-8b5e-c7adcbe85387] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.004325158s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.31s)
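Note: NetCatPod deploys a small netcat workload from the repo's test data and waits for it to become Ready; the DNS, Localhost and HairPin probes that follow all exec into this deployment. A manual equivalent, assuming a working directory where testdata/ resolves (the wait step stands in for the harness's own polling):

	kubectl --context auto-824648 replace --force -f testdata/netcat-deployment.yaml
	kubectl --context auto-824648 wait pod -l app=netcat --for=condition=Ready --timeout=15m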
TestNetworkPlugins/group/kindnet/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-824648 "pgrep -a kubelet"
I0407 14:00:23.067342 1495026 config.go:182] Loaded profile config "kindnet-824648": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.32s)

TestNetworkPlugins/group/kindnet/NetCatPod (11.30s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-824648 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-z5sz8" [3091676d-fe68-4cd5-a9a8-229911747316] Pending
helpers_test.go:344: "netcat-5d86dc444-z5sz8" [3091676d-fe68-4cd5-a9a8-229911747316] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.004158548s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.30s)

TestNetworkPlugins/group/auto/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-824648 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.22s)

TestNetworkPlugins/group/auto/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-824648 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.16s)

TestNetworkPlugins/group/auto/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-824648 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.18s)
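Note: the three probes exercise distinct data paths through the CNI. DNS resolves the in-cluster service domain, Localhost connects to the pod's own port via 127.0.0.1, and HairPin has the pod reach itself through its own service name, i.e. traffic leaves the pod and must loop back. Runnable as-is against this cluster:

	kubectl --context auto-824648 exec deployment/netcat -- nslookup kubernetes.default
	kubectl --context auto-824648 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
	kubectl --context auto-824648 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"   # hairpin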
TestNetworkPlugins/group/kindnet/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-824648 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.19s)

TestNetworkPlugins/group/kindnet/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-824648 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.16s)

TestNetworkPlugins/group/kindnet/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-824648 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.16s)

TestNetworkPlugins/group/calico/Start (104.96s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-824648 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-824648 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker: (1m44.959179099s)
--- PASS: TestNetworkPlugins/group/calico/Start (104.96s)
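Note: the Start runs in this group differ only in the --cni flag, which takes either a built-in plugin name (calico, flannel, kindnet, bridge; false disables CNI) or a path to a CNI manifest, as the custom-flannel run below shows. Condensed, with the other flags held constant:

	out/minikube-linux-arm64 start -p calico-824648 --memory=3072 --cni=calico --driver=docker --container-runtime=docker
	out/minikube-linux-arm64 start -p custom-flannel-824648 --memory=3072 --cni=testdata/kube-flannel.yaml --driver=docker --container-runtime=docker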
TestNetworkPlugins/group/custom-flannel/Start (58.30s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-824648 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker
E0407 14:01:31.347637 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/default-k8s-diff-port-872084/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:01:56.602384 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/skaffold-943047/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-824648 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker: (58.297527794s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (58.30s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.41s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-824648 "pgrep -a kubelet"
E0407 14:01:59.051038 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/default-k8s-diff-port-872084/client.crt: no such file or directory" logger="UnhandledError"
I0407 14:01:59.353598 1495026 config.go:182] Loaded profile config "custom-flannel-824648": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.41s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (14.35s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-824648 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-fhbth" [41a69662-b393-4def-ac30-240a4e56a853] Pending
helpers_test.go:344: "netcat-5d86dc444-fhbth" [41a69662-b393-4def-ac30-240a4e56a853] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-fhbth" [41a69662-b393-4def-ac30-240a4e56a853] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 14.004130454s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (14.35s)

TestNetworkPlugins/group/custom-flannel/DNS (0.40s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-824648 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.40s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.30s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-824648 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.30s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.22s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-824648 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.22s)

TestNetworkPlugins/group/false/Start (76.23s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p false-824648 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p false-824648 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker: (1m16.234518015s)
--- PASS: TestNetworkPlugins/group/false/Start (76.23s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-lhk68" [a76d6087-d00f-479f-b737-7d7399b7a4c9] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004310548s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-824648 "pgrep -a kubelet"
I0407 14:02:49.698707 1495026 config.go:182] Loaded profile config "calico-824648": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.32s)

TestNetworkPlugins/group/calico/NetCatPod (12.37s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-824648 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-ckxx5" [2a22c1bf-9800-4b0a-8129-f1d8decb827e] Pending
helpers_test.go:344: "netcat-5d86dc444-ckxx5" [2a22c1bf-9800-4b0a-8129-f1d8decb827e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-ckxx5" [2a22c1bf-9800-4b0a-8129-f1d8decb827e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.004161779s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.37s)

TestNetworkPlugins/group/calico/DNS (0.31s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-824648 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.31s)

TestNetworkPlugins/group/calico/Localhost (0.22s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-824648 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.22s)

TestNetworkPlugins/group/calico/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-824648 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.21s)

TestNetworkPlugins/group/enable-default-cni/Start (42.29s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-824648 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker
E0407 14:03:41.042123 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/no-preload-474436/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:03:41.048452 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/no-preload-474436/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:03:41.059788 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/no-preload-474436/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:03:41.081119 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/no-preload-474436/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:03:41.122520 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/no-preload-474436/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:03:41.203900 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/no-preload-474436/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:03:41.365869 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/no-preload-474436/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:03:41.687197 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/no-preload-474436/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:03:42.328909 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/no-preload-474436/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:03:43.610244 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/no-preload-474436/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:03:46.171604 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/no-preload-474436/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:03:51.293442 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/no-preload-474436/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-824648 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker: (42.287953333s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (42.29s)

TestNetworkPlugins/group/false/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p false-824648 "pgrep -a kubelet"
I0407 14:03:56.730502 1495026 config.go:182] Loaded profile config "false-824648": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.33s)

TestNetworkPlugins/group/false/NetCatPod (11.33s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-824648 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-lvwcz" [217ec123-06e5-420c-a414-29e32c8528a0] Pending
E0407 14:04:01.534780 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/no-preload-474436/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-5d86dc444-lvwcz" [217ec123-06e5-420c-a414-29e32c8528a0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 11.002979873s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (11.33s)

TestNetworkPlugins/group/false/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-824648 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.19s)

TestNetworkPlugins/group/false/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-824648 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
E0407 14:04:08.258887 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/addons-378486/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.16s)

TestNetworkPlugins/group/false/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-824648 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.16s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-824648 "pgrep -a kubelet"
I0407 14:04:11.018846 1495026 config.go:182] Loaded profile config "enable-default-cni-824648": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.29s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.28s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-824648 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-4j62v" [784bd58a-16b9-404f-abf4-38f908c8c6a7] Pending
helpers_test.go:344: "netcat-5d86dc444-4j62v" [784bd58a-16b9-404f-abf4-38f908c8c6a7] Running
E0407 14:04:22.016925 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/no-preload-474436/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 13.004131518s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.28s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.28s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-824648 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.28s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.23s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-824648 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.23s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.31s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-824648 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.31s)

TestNetworkPlugins/group/flannel/Start (64.88s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-824648 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-824648 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker: (1m4.876717728s)
--- PASS: TestNetworkPlugins/group/flannel/Start (64.88s)

TestNetworkPlugins/group/bridge/Start (83.37s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-824648 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker
E0407 14:05:02.978666 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/no-preload-474436/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:05:13.869585 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/old-k8s-version-169187/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:05:16.747879 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/kindnet-824648/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:05:16.754197 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/kindnet-824648/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:05:16.765534 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/kindnet-824648/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:05:16.786827 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/kindnet-824648/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:05:16.828194 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/kindnet-824648/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:05:16.909987 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/kindnet-824648/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:05:17.071391 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/kindnet-824648/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:05:17.392839 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/kindnet-824648/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:05:17.615111 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/functional-340022/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:05:18.034523 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/kindnet-824648/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:05:19.316467 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/kindnet-824648/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:05:20.501687 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/auto-824648/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:05:20.507990 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/auto-824648/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:05:20.519306 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/auto-824648/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:05:20.540649 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/auto-824648/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:05:20.582028 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/auto-824648/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:05:20.663369 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/auto-824648/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:05:20.824731 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/auto-824648/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:05:21.146394 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/auto-824648/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:05:21.788135 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/auto-824648/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:05:21.878521 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/kindnet-824648/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:05:23.070152 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/auto-824648/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:05:25.631638 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/auto-824648/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:05:27.000713 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/kindnet-824648/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:05:30.753907 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/auto-824648/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:05:31.331998 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/addons-378486/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-824648 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker: (1m23.371233354s)
--- PASS: TestNetworkPlugins/group/bridge/Start (83.37s)

TestNetworkPlugins/group/flannel/ControllerPod (6s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-pnnjx" [2dd313d8-6286-4ae3-a180-6cb48562fec3] Running
E0407 14:05:37.242167 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/kindnet-824648/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003646347s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.00s)
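
The ControllerPod check only polls for a Running flannel pod. To spot-check the same condition interactively, a label query like the one below (a sketch; the namespace and label come from the test's own wait filter) lists the DaemonSet pod the test waits on.

	kubectl --context flannel-824648 -n kube-flannel get pods -l app=flannel -o wide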

TestNetworkPlugins/group/flannel/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-824648 "pgrep -a kubelet"
I0407 14:05:40.912313 1495026 config.go:182] Loaded profile config "flannel-824648": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.29s)

TestNetworkPlugins/group/flannel/NetCatPod (10.31s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-824648 replace --force -f testdata/netcat-deployment.yaml
E0407 14:05:40.995331 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/auto-824648/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-9vxc7" [d902ff61-183b-498e-baea-74e85e94af9f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-9vxc7" [d902ff61-183b-498e-baea-74e85e94af9f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.004407573s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.31s)

TestNetworkPlugins/group/flannel/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-824648 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.20s)

TestNetworkPlugins/group/flannel/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-824648 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.17s)

TestNetworkPlugins/group/flannel/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-824648 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.16s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.44s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-824648 "pgrep -a kubelet"
I0407 14:06:13.430831 1495026 config.go:182] Loaded profile config "bridge-824648": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.44s)

TestNetworkPlugins/group/bridge/NetCatPod (12.35s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-824648 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-2n47x" [f149c95c-7104-43d2-bef8-4366b9ca5e8f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-2n47x" [f149c95c-7104-43d2-bef8-4366b9ca5e8f] Running
E0407 14:06:24.900112 1495026 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/no-preload-474436/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 12.003344669s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (12.35s)

TestNetworkPlugins/group/kubenet/Start (75.99s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kubenet-824648 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kubenet-824648 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker: (1m15.986113524s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (75.99s)

TestNetworkPlugins/group/bridge/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-824648 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.24s)

TestNetworkPlugins/group/bridge/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-824648 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.21s)

TestNetworkPlugins/group/bridge/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-824648 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.20s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kubenet-824648 "pgrep -a kubelet"
I0407 14:07:29.811729 1495026 config.go:182] Loaded profile config "kubenet-824648": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.30s)

TestNetworkPlugins/group/kubenet/NetCatPod (10.28s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-824648 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-c4gn9" [efc39411-29e8-48f1-8a2f-449d8d921b84] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-c4gn9" [efc39411-29e8-48f1-8a2f-449d8d921b84] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 10.005681923s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (10.28s)

TestNetworkPlugins/group/kubenet/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-824648 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.19s)

TestNetworkPlugins/group/kubenet/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-824648 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.15s)

TestNetworkPlugins/group/kubenet/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-824648 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.15s)

Test skip (26/346)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.32.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.32.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.32.2/cached-images (0.00s)

TestDownloadOnly/v1.32.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.32.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.32.2/binaries (0.00s)

TestDownloadOnly/v1.32.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.32.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.32.2/kubectl (0.00s)

TestDownloadOnlyKic (0.59s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-600192 --alsologtostderr --driver=docker  --container-runtime=docker
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-600192" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-600192
--- SKIP: TestDownloadOnlyKic (0.59s)

TestAddons/serial/GCPAuth/RealCredentials (0.01s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:698: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.01s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:972: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1804: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:567: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:84: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestStartStop/group/disable-driver-mounts (0.2s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-521141" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-521141
--- SKIP: TestStartStop/group/disable-driver-mounts (0.20s)

TestNetworkPlugins/group/cilium (5.65s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:631: 
----------------------- debugLogs start: cilium-824648 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-824648

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-824648

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-824648

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-824648

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-824648

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-824648

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-824648

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-824648

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-824648

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-824648

>>> host: /etc/nsswitch.conf:
* Profile "cilium-824648" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-824648"

>>> host: /etc/hosts:
* Profile "cilium-824648" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-824648"

>>> host: /etc/resolv.conf:
* Profile "cilium-824648" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-824648"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-824648

>>> host: crictl pods:
* Profile "cilium-824648" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-824648"

>>> host: crictl containers:
* Profile "cilium-824648" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-824648"

>>> k8s: describe netcat deployment:
error: context "cilium-824648" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-824648" does not exist

>>> k8s: netcat logs:
error: context "cilium-824648" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-824648" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-824648" does not exist

>>> k8s: coredns logs:
error: context "cilium-824648" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-824648" does not exist

>>> k8s: api server logs:
error: context "cilium-824648" does not exist

>>> host: /etc/cni:
* Profile "cilium-824648" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-824648"

>>> host: ip a s:
* Profile "cilium-824648" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-824648"

>>> host: ip r s:
* Profile "cilium-824648" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-824648"

>>> host: iptables-save:
* Profile "cilium-824648" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-824648"

>>> host: iptables table nat:
* Profile "cilium-824648" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-824648"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-824648

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-824648

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-824648" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-824648" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-824648

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-824648

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-824648" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-824648" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-824648" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-824648" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-824648" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-824648" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-824648"

>>> host: kubelet daemon config:
* Profile "cilium-824648" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-824648"

>>> k8s: kubelet logs:
* Profile "cilium-824648" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-824648"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-824648" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-824648"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-824648" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-824648"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20598-1489638/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 07 Apr 2025 13:40:47 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: pause-212455
contexts:
- context:
    cluster: pause-212455
    extensions:
    - extension:
        last-update: Mon, 07 Apr 2025 13:40:47 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: pause-212455
  name: pause-212455
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-212455
  user:
    client-certificate: /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/pause-212455/client.crt
    client-key: /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/pause-212455/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-824648

>>> host: docker daemon status:
* Profile "cilium-824648" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-824648"

>>> host: docker daemon config:
* Profile "cilium-824648" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-824648"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-824648" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-824648"

>>> host: docker system info:
* Profile "cilium-824648" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-824648"

>>> host: cri-docker daemon status:
* Profile "cilium-824648" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-824648"

>>> host: cri-docker daemon config:
* Profile "cilium-824648" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-824648"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-824648" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-824648"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-824648" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-824648"

>>> host: cri-dockerd version:
* Profile "cilium-824648" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-824648"

>>> host: containerd daemon status:
* Profile "cilium-824648" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-824648"

>>> host: containerd daemon config:
* Profile "cilium-824648" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-824648"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-824648" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-824648"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-824648" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-824648"

>>> host: containerd config dump:
* Profile "cilium-824648" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-824648"

>>> host: crio daemon status:
* Profile "cilium-824648" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-824648"

>>> host: crio daemon config:
* Profile "cilium-824648" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-824648"

>>> host: /etc/crio:
* Profile "cilium-824648" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-824648"

>>> host: crio config:
* Profile "cilium-824648" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-824648"

----------------------- debugLogs end: cilium-824648 [took: 5.4346734s] --------------------------------
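
The kubectl config dump above explains why every probe in this debug dump failed: the only kubeconfig entry left is the unrelated pause-212455 cluster and current-context is empty, because the skipped cilium test never created a cluster. One way to confirm the same state from the workstation (a sketch using standard commands, not something the test itself runs):

	kubectl config get-contexts
	minikube profile list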
helpers_test.go:175: Cleaning up "cilium-824648" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-824648
--- SKIP: TestNetworkPlugins/group/cilium (5.65s)