Test Report: Docker_Linux_containerd_arm64 20068

3e5ae302b6a4bf4af6cc92954bf8488d685fb633:2024-12-09:37406

Failed tests (1/330)

Order | Failed test                                             | Duration (s)
304   | TestStartStop/group/old-k8s-version/serial/SecondStart | 373.16
TestStartStop/group/old-k8s-version/serial/SecondStart (373.16s)
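To reproduce locally, the serial group can be re-run from a minikube source checkout. The following is a minimal sketch assuming minikube's standard integration-test layout; the make targets and e2e flags below are assumptions, not taken from this report, and SecondStart depends on the earlier steps in its group, so the whole group is selected rather than the single subtest:

	# Build the minikube and e2e binaries for this platform (assumed make targets).
	make out/minikube-linux-arm64 out/e2e-linux-arm64
	# Run the old-k8s-version serial group with the driver/runtime used in this job.
	out/e2e-linux-arm64 \
	  -minikube-start-args="--driver=docker --container-runtime=containerd" \
	  -test.run "TestStartStop/group/old-k8s-version" -test.timeout 60m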

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-623695 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
E1209 11:25:27.567617  592080 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/addons-764596/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p old-k8s-version-623695 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: exit status 102 (6m8.842301344s)

-- stdout --
	* [old-k8s-version-623695] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20068
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20068-586689/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20068-586689/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	* Using the docker driver based on existing profile
	* Starting "old-k8s-version-623695" primary control-plane node in "old-k8s-version-623695" cluster
	* Pulling base image v0.0.45-1730888964-19917 ...
	* Restarting existing docker container for "old-k8s-version-623695" ...
	* Preparing Kubernetes v1.20.0 on containerd 1.7.22 ...
	* Verifying Kubernetes components...
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image registry.k8s.io/echoserver:1.4
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-623695 addons enable metrics-server
	
	* Enabled addons: default-storageclass, metrics-server, storage-provisioner, dashboard
	
	

-- /stdout --
** stderr ** 
	I1209 11:25:22.128588  800461 out.go:345] Setting OutFile to fd 1 ...
	I1209 11:25:22.128844  800461 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 11:25:22.128872  800461 out.go:358] Setting ErrFile to fd 2...
	I1209 11:25:22.128894  800461 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 11:25:22.129203  800461 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20068-586689/.minikube/bin
	I1209 11:25:22.129651  800461 out.go:352] Setting JSON to false
	I1209 11:25:22.130728  800461 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":14870,"bootTime":1733728653,"procs":193,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1072-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1209 11:25:22.130832  800461 start.go:139] virtualization:  
	I1209 11:25:22.134723  800461 out.go:177] * [old-k8s-version-623695] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1209 11:25:22.137187  800461 notify.go:220] Checking for updates...
	I1209 11:25:22.138119  800461 out.go:177]   - MINIKUBE_LOCATION=20068
	I1209 11:25:22.140125  800461 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 11:25:22.144142  800461 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20068-586689/kubeconfig
	I1209 11:25:22.146573  800461 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20068-586689/.minikube
	I1209 11:25:22.148989  800461 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1209 11:25:22.151127  800461 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 11:25:22.154138  800461 config.go:182] Loaded profile config "old-k8s-version-623695": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I1209 11:25:22.156633  800461 out.go:177] * Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	I1209 11:25:22.158810  800461 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 11:25:22.192929  800461 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1209 11:25:22.193109  800461 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 11:25:22.281953  800461 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:54 OomKillDisable:true NGoroutines:67 SystemTime:2024-12-09 11:25:22.267248146 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1072-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1209 11:25:22.282099  800461 docker.go:318] overlay module found
	I1209 11:25:22.284719  800461 out.go:177] * Using the docker driver based on existing profile
	I1209 11:25:22.286823  800461 start.go:297] selected driver: docker
	I1209 11:25:22.286847  800461 start.go:901] validating driver "docker" against &{Name:old-k8s-version-623695 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-623695 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 11:25:22.286960  800461 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 11:25:22.287670  800461 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 11:25:22.366631  800461 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:54 OomKillDisable:true NGoroutines:67 SystemTime:2024-12-09 11:25:22.354752609 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1072-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1209 11:25:22.367065  800461 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 11:25:22.367084  800461 cni.go:84] Creating CNI manager for ""
	I1209 11:25:22.367151  800461 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1209 11:25:22.367210  800461 start.go:340] cluster config:
	{Name:old-k8s-version-623695 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-623695 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 11:25:22.369677  800461 out.go:177] * Starting "old-k8s-version-623695" primary control-plane node in "old-k8s-version-623695" cluster
	I1209 11:25:22.371827  800461 cache.go:121] Beginning downloading kic base image for docker with containerd
	I1209 11:25:22.374004  800461 out.go:177] * Pulling base image v0.0.45-1730888964-19917 ...
	I1209 11:25:22.376214  800461 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I1209 11:25:22.376276  800461 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20068-586689/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I1209 11:25:22.376285  800461 cache.go:56] Caching tarball of preloaded images
	I1209 11:25:22.376386  800461 preload.go:172] Found /home/jenkins/minikube-integration/20068-586689/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1209 11:25:22.376396  800461 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on containerd
	I1209 11:25:22.376518  800461 profile.go:143] Saving config to /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/old-k8s-version-623695/config.json ...
	I1209 11:25:22.376739  800461 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 in local docker daemon
	I1209 11:25:22.418696  800461 image.go:98] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 in local docker daemon, skipping pull
	I1209 11:25:22.418723  800461 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 exists in daemon, skipping load
	I1209 11:25:22.418739  800461 cache.go:194] Successfully downloaded all kic artifacts
	I1209 11:25:22.418774  800461 start.go:360] acquireMachinesLock for old-k8s-version-623695: {Name:mk30ad5946677ce9584302a554d89e2bca295e92 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 11:25:22.418847  800461 start.go:364] duration metric: took 44.161µs to acquireMachinesLock for "old-k8s-version-623695"
	I1209 11:25:22.418876  800461 start.go:96] Skipping create...Using existing machine configuration
	I1209 11:25:22.418887  800461 fix.go:54] fixHost starting: 
	I1209 11:25:22.419173  800461 cli_runner.go:164] Run: docker container inspect old-k8s-version-623695 --format={{.State.Status}}
	I1209 11:25:22.471000  800461 fix.go:112] recreateIfNeeded on old-k8s-version-623695: state=Stopped err=<nil>
	W1209 11:25:22.471035  800461 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 11:25:22.473650  800461 out.go:177] * Restarting existing docker container for "old-k8s-version-623695" ...
	I1209 11:25:22.475782  800461 cli_runner.go:164] Run: docker start old-k8s-version-623695
	I1209 11:25:22.850233  800461 cli_runner.go:164] Run: docker container inspect old-k8s-version-623695 --format={{.State.Status}}
	I1209 11:25:22.877762  800461 kic.go:430] container "old-k8s-version-623695" state is running.
	I1209 11:25:22.878168  800461 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-623695
	I1209 11:25:22.902787  800461 profile.go:143] Saving config to /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/old-k8s-version-623695/config.json ...
	I1209 11:25:22.903009  800461 machine.go:93] provisionDockerMachine start ...
	I1209 11:25:22.903071  800461 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-623695
	I1209 11:25:22.922309  800461 main.go:141] libmachine: Using SSH client type: native
	I1209 11:25:22.922579  800461 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x415f50] 0x418790 <nil>  [] 0s} 127.0.0.1 33802 <nil> <nil>}
	I1209 11:25:22.922589  800461 main.go:141] libmachine: About to run SSH command:
	hostname
	I1209 11:25:22.925826  800461 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1209 11:25:26.077094  800461 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-623695
	
	I1209 11:25:26.077206  800461 ubuntu.go:169] provisioning hostname "old-k8s-version-623695"
	I1209 11:25:26.077313  800461 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-623695
	I1209 11:25:26.107019  800461 main.go:141] libmachine: Using SSH client type: native
	I1209 11:25:26.107285  800461 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x415f50] 0x418790 <nil>  [] 0s} 127.0.0.1 33802 <nil> <nil>}
	I1209 11:25:26.107296  800461 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-623695 && echo "old-k8s-version-623695" | sudo tee /etc/hostname
	I1209 11:25:26.281754  800461 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-623695
	
	I1209 11:25:26.281845  800461 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-623695
	I1209 11:25:26.315039  800461 main.go:141] libmachine: Using SSH client type: native
	I1209 11:25:26.315309  800461 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x415f50] 0x418790 <nil>  [] 0s} 127.0.0.1 33802 <nil> <nil>}
	I1209 11:25:26.315333  800461 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-623695' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-623695/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-623695' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 11:25:26.465174  800461 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 11:25:26.465203  800461 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20068-586689/.minikube CaCertPath:/home/jenkins/minikube-integration/20068-586689/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20068-586689/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20068-586689/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20068-586689/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20068-586689/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20068-586689/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20068-586689/.minikube}
	I1209 11:25:26.465235  800461 ubuntu.go:177] setting up certificates
	I1209 11:25:26.465244  800461 provision.go:84] configureAuth start
	I1209 11:25:26.465307  800461 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-623695
	I1209 11:25:26.498741  800461 provision.go:143] copyHostCerts
	I1209 11:25:26.498816  800461 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-586689/.minikube/ca.pem, removing ...
	I1209 11:25:26.498835  800461 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-586689/.minikube/ca.pem
	I1209 11:25:26.498926  800461 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-586689/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20068-586689/.minikube/ca.pem (1078 bytes)
	I1209 11:25:26.499039  800461 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-586689/.minikube/cert.pem, removing ...
	I1209 11:25:26.499050  800461 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-586689/.minikube/cert.pem
	I1209 11:25:26.499080  800461 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-586689/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20068-586689/.minikube/cert.pem (1123 bytes)
	I1209 11:25:26.499147  800461 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-586689/.minikube/key.pem, removing ...
	I1209 11:25:26.499157  800461 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-586689/.minikube/key.pem
	I1209 11:25:26.499182  800461 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-586689/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20068-586689/.minikube/key.pem (1679 bytes)
	I1209 11:25:26.499244  800461 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20068-586689/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20068-586689/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20068-586689/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-623695 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-623695]
	I1209 11:25:26.757800  800461 provision.go:177] copyRemoteCerts
	I1209 11:25:26.757872  800461 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 11:25:26.757923  800461 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-623695
	I1209 11:25:26.776079  800461 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33802 SSHKeyPath:/home/jenkins/minikube-integration/20068-586689/.minikube/machines/old-k8s-version-623695/id_rsa Username:docker}
	I1209 11:25:26.869943  800461 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-586689/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1209 11:25:26.911148  800461 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-586689/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1209 11:25:26.946806  800461 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-586689/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1209 11:25:26.992749  800461 provision.go:87] duration metric: took 527.487908ms to configureAuth
	I1209 11:25:26.992774  800461 ubuntu.go:193] setting minikube options for container-runtime
	I1209 11:25:26.992977  800461 config.go:182] Loaded profile config "old-k8s-version-623695": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I1209 11:25:26.992984  800461 machine.go:96] duration metric: took 4.089968262s to provisionDockerMachine
	I1209 11:25:26.992992  800461 start.go:293] postStartSetup for "old-k8s-version-623695" (driver="docker")
	I1209 11:25:26.993003  800461 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 11:25:26.993052  800461 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 11:25:26.993096  800461 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-623695
	I1209 11:25:27.027430  800461 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33802 SSHKeyPath:/home/jenkins/minikube-integration/20068-586689/.minikube/machines/old-k8s-version-623695/id_rsa Username:docker}
	I1209 11:25:27.132583  800461 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 11:25:27.136238  800461 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1209 11:25:27.136278  800461 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1209 11:25:27.136289  800461 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1209 11:25:27.136297  800461 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1209 11:25:27.136308  800461 filesync.go:126] Scanning /home/jenkins/minikube-integration/20068-586689/.minikube/addons for local assets ...
	I1209 11:25:27.136365  800461 filesync.go:126] Scanning /home/jenkins/minikube-integration/20068-586689/.minikube/files for local assets ...
	I1209 11:25:27.136450  800461 filesync.go:149] local asset: /home/jenkins/minikube-integration/20068-586689/.minikube/files/etc/ssl/certs/5920802.pem -> 5920802.pem in /etc/ssl/certs
	I1209 11:25:27.136562  800461 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 11:25:27.150729  800461 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-586689/.minikube/files/etc/ssl/certs/5920802.pem --> /etc/ssl/certs/5920802.pem (1708 bytes)
	I1209 11:25:27.188672  800461 start.go:296] duration metric: took 195.662785ms for postStartSetup
	I1209 11:25:27.188772  800461 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 11:25:27.188818  800461 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-623695
	I1209 11:25:27.218791  800461 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33802 SSHKeyPath:/home/jenkins/minikube-integration/20068-586689/.minikube/machines/old-k8s-version-623695/id_rsa Username:docker}
	I1209 11:25:27.318274  800461 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1209 11:25:27.323920  800461 fix.go:56] duration metric: took 4.905026182s for fixHost
	I1209 11:25:27.323946  800461 start.go:83] releasing machines lock for "old-k8s-version-623695", held for 4.905086779s
	I1209 11:25:27.324059  800461 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-623695
	I1209 11:25:27.349005  800461 ssh_runner.go:195] Run: cat /version.json
	I1209 11:25:27.349072  800461 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-623695
	I1209 11:25:27.349287  800461 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 11:25:27.349352  800461 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-623695
	I1209 11:25:27.381331  800461 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33802 SSHKeyPath:/home/jenkins/minikube-integration/20068-586689/.minikube/machines/old-k8s-version-623695/id_rsa Username:docker}
	I1209 11:25:27.387659  800461 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33802 SSHKeyPath:/home/jenkins/minikube-integration/20068-586689/.minikube/machines/old-k8s-version-623695/id_rsa Username:docker}
	I1209 11:25:27.626453  800461 ssh_runner.go:195] Run: systemctl --version
	I1209 11:25:27.631278  800461 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1209 11:25:27.641788  800461 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I1209 11:25:27.671024  800461 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I1209 11:25:27.671173  800461 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 11:25:27.682583  800461 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1209 11:25:27.682645  800461 start.go:495] detecting cgroup driver to use...
	I1209 11:25:27.682702  800461 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1209 11:25:27.682775  800461 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1209 11:25:27.705704  800461 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1209 11:25:27.732685  800461 docker.go:217] disabling cri-docker service (if available) ...
	I1209 11:25:27.732799  800461 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 11:25:27.755963  800461 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 11:25:27.779797  800461 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 11:25:27.924381  800461 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 11:25:28.066666  800461 docker.go:233] disabling docker service ...
	I1209 11:25:28.066786  800461 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 11:25:28.087938  800461 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 11:25:28.102978  800461 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 11:25:28.269735  800461 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 11:25:28.431244  800461 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 11:25:28.447421  800461 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 11:25:28.481183  800461 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I1209 11:25:28.496375  800461 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1209 11:25:28.513515  800461 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1209 11:25:28.513591  800461 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1209 11:25:28.527734  800461 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1209 11:25:28.544122  800461 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1209 11:25:28.558312  800461 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1209 11:25:28.573775  800461 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 11:25:28.585514  800461 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1209 11:25:28.606645  800461 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 11:25:28.618251  800461 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 11:25:28.632859  800461 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 11:25:28.772990  800461 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1209 11:25:29.047953  800461 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I1209 11:25:29.048026  800461 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1209 11:25:29.067089  800461 start.go:563] Will wait 60s for crictl version
	I1209 11:25:29.067162  800461 ssh_runner.go:195] Run: which crictl
	I1209 11:25:29.077827  800461 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1209 11:25:29.131109  800461 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.22
	RuntimeApiVersion:  v1
	I1209 11:25:29.131189  800461 ssh_runner.go:195] Run: containerd --version
	I1209 11:25:29.155889  800461 ssh_runner.go:195] Run: containerd --version
	I1209 11:25:29.185455  800461 out.go:177] * Preparing Kubernetes v1.20.0 on containerd 1.7.22 ...
	I1209 11:25:29.187502  800461 cli_runner.go:164] Run: docker network inspect old-k8s-version-623695 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1209 11:25:29.202801  800461 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1209 11:25:29.207195  800461 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 11:25:29.219268  800461 kubeadm.go:883] updating cluster {Name:old-k8s-version-623695 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-623695 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 11:25:29.219400  800461 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I1209 11:25:29.219469  800461 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 11:25:29.272172  800461 containerd.go:627] all images are preloaded for containerd runtime.
	I1209 11:25:29.272197  800461 containerd.go:534] Images already preloaded, skipping extraction
	I1209 11:25:29.272258  800461 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 11:25:29.323035  800461 containerd.go:627] all images are preloaded for containerd runtime.
	I1209 11:25:29.323114  800461 cache_images.go:84] Images are preloaded, skipping loading
	I1209 11:25:29.323159  800461 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.20.0 containerd true true} ...
	I1209 11:25:29.323323  800461 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=old-k8s-version-623695 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-623695 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1209 11:25:29.323424  800461 ssh_runner.go:195] Run: sudo crictl info
	I1209 11:25:29.376480  800461 cni.go:84] Creating CNI manager for ""
	I1209 11:25:29.376508  800461 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1209 11:25:29.376519  800461 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1209 11:25:29.376540  800461 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-623695 NodeName:old-k8s-version-623695 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1209 11:25:29.376688  800461 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "old-k8s-version-623695"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1209 11:25:29.376756  800461 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1209 11:25:29.388893  800461 binaries.go:44] Found k8s binaries, skipping transfer
	I1209 11:25:29.389071  800461 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 11:25:29.399788  800461 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (442 bytes)
	I1209 11:25:29.421347  800461 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 11:25:29.443360  800461 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2125 bytes)
	I1209 11:25:29.467336  800461 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1209 11:25:29.471094  800461 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 11:25:29.482451  800461 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 11:25:29.636020  800461 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 11:25:29.650941  800461 certs.go:68] Setting up /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/old-k8s-version-623695 for IP: 192.168.85.2
	I1209 11:25:29.650966  800461 certs.go:194] generating shared ca certs ...
	I1209 11:25:29.650982  800461 certs.go:226] acquiring lock for ca certs: {Name:mkf9a6796a1bfe0d2ad344a1e9f65da735c51ff9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 11:25:29.651115  800461 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20068-586689/.minikube/ca.key
	I1209 11:25:29.651171  800461 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20068-586689/.minikube/proxy-client-ca.key
	I1209 11:25:29.651184  800461 certs.go:256] generating profile certs ...
	I1209 11:25:29.651275  800461 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/old-k8s-version-623695/client.key
	I1209 11:25:29.651353  800461 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/old-k8s-version-623695/apiserver.key.3b2ad64b
	I1209 11:25:29.651397  800461 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/old-k8s-version-623695/proxy-client.key
	I1209 11:25:29.651515  800461 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-586689/.minikube/certs/592080.pem (1338 bytes)
	W1209 11:25:29.651548  800461 certs.go:480] ignoring /home/jenkins/minikube-integration/20068-586689/.minikube/certs/592080_empty.pem, impossibly tiny 0 bytes
	I1209 11:25:29.651561  800461 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-586689/.minikube/certs/ca-key.pem (1679 bytes)
	I1209 11:25:29.651592  800461 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-586689/.minikube/certs/ca.pem (1078 bytes)
	I1209 11:25:29.651632  800461 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-586689/.minikube/certs/cert.pem (1123 bytes)
	I1209 11:25:29.651659  800461 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-586689/.minikube/certs/key.pem (1679 bytes)
	I1209 11:25:29.651710  800461 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-586689/.minikube/files/etc/ssl/certs/5920802.pem (1708 bytes)
	I1209 11:25:29.652368  800461 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-586689/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 11:25:29.691501  800461 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-586689/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1209 11:25:29.719791  800461 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-586689/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 11:25:29.748398  800461 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-586689/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1209 11:25:29.775828  800461 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/old-k8s-version-623695/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1209 11:25:29.816141  800461 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/old-k8s-version-623695/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1209 11:25:29.893230  800461 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/old-k8s-version-623695/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 11:25:29.929202  800461 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/old-k8s-version-623695/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1209 11:25:29.970424  800461 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-586689/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 11:25:30.076009  800461 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-586689/.minikube/certs/592080.pem --> /usr/share/ca-certificates/592080.pem (1338 bytes)
	I1209 11:25:30.138411  800461 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-586689/.minikube/files/etc/ssl/certs/5920802.pem --> /usr/share/ca-certificates/5920802.pem (1708 bytes)
	I1209 11:25:30.182212  800461 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 11:25:30.217824  800461 ssh_runner.go:195] Run: openssl version
	I1209 11:25:30.224366  800461 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5920802.pem && ln -fs /usr/share/ca-certificates/5920802.pem /etc/ssl/certs/5920802.pem"
	I1209 11:25:30.235595  800461 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5920802.pem
	I1209 11:25:30.240312  800461 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 10:44 /usr/share/ca-certificates/5920802.pem
	I1209 11:25:30.240402  800461 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5920802.pem
	I1209 11:25:30.250143  800461 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5920802.pem /etc/ssl/certs/3ec20f2e.0"
	I1209 11:25:30.261400  800461 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1209 11:25:30.272969  800461 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 11:25:30.277712  800461 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 10:37 /usr/share/ca-certificates/minikubeCA.pem
	I1209 11:25:30.277783  800461 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 11:25:30.285954  800461 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1209 11:25:30.296418  800461 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/592080.pem && ln -fs /usr/share/ca-certificates/592080.pem /etc/ssl/certs/592080.pem"
	I1209 11:25:30.307580  800461 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/592080.pem
	I1209 11:25:30.311898  800461 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 10:44 /usr/share/ca-certificates/592080.pem
	I1209 11:25:30.311967  800461 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/592080.pem
	I1209 11:25:30.319712  800461 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/592080.pem /etc/ssl/certs/51391683.0"
	I1209 11:25:30.330403  800461 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 11:25:30.334808  800461 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1209 11:25:30.346865  800461 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1209 11:25:30.357888  800461 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1209 11:25:30.365959  800461 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1209 11:25:30.375557  800461 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1209 11:25:30.385627  800461 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1209 11:25:30.393269  800461 kubeadm.go:392] StartCluster: {Name:old-k8s-version-623695 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-623695 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 11:25:30.393398  800461 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1209 11:25:30.393461  800461 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 11:25:30.446836  800461 cri.go:89] found id: "484100ecf70c93234ce300e5b905734cece0723a625060c5e6f1e45f273ba13d"
	I1209 11:25:30.446865  800461 cri.go:89] found id: "ed42b2a1e21f5bc2262274dce47ab17b0351c70af13f20f07756dca1fc28f478"
	I1209 11:25:30.446872  800461 cri.go:89] found id: "eb174547e077389a60898a71aa5ab17ac0877a9aa4886acc78d7d9454ff6ec44"
	I1209 11:25:30.446876  800461 cri.go:89] found id: "c8dee69e2c3486f5230d08c0860efbe796008eebd4c95c9749003caa1b5e8c95"
	I1209 11:25:30.446879  800461 cri.go:89] found id: "a9ed84f8cfb61c9caff73d1ed13931a69b4186cabc325c3cc1809699aa1be74e"
	I1209 11:25:30.446885  800461 cri.go:89] found id: "25fc7bce15ad22f8de3baaeb0bb7f687bb2b3dc94e08ab8eb20977713310736b"
	I1209 11:25:30.446888  800461 cri.go:89] found id: "0a9c5fc2481f8b0b984d6634dcbb9354b762189dacc2253907afe96468b89d47"
	I1209 11:25:30.446891  800461 cri.go:89] found id: "8f5f2eca7e918147351695f83e907d55498a88bcb6c566f93410605398939265"
	I1209 11:25:30.446894  800461 cri.go:89] found id: "2b8b97c8ef833f12b7f54ccb1f4f9e43ee5e96f62f8bb9ee22ef50e58e3ea81f"
	I1209 11:25:30.446900  800461 cri.go:89] found id: ""
	I1209 11:25:30.446951  800461 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1209 11:25:30.460859  800461 cri.go:116] JSON = null
	W1209 11:25:30.460909  800461 kubeadm.go:399] unpause failed: list paused: list returned 0 containers, but ps returned 9
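The kubeadm.go:399 warning comes from comparing two views of the same containers: `crictl ps` (containerd's CRI view) found 9 kube-system containers, while `runc --root /run/containerd/runc/k8s.io list -f json` printed `null`, i.e. zero paused containers to resume, so the unpause step is skipped with a warning rather than failing. A hedged sketch of that consistency check (command lines taken from the log; error handling elided for brevity):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// CRI view: container IDs of kube-system pods, one per line.
	criOut, _ := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	criIDs := strings.Fields(string(criOut))

	// runc view: JSON list of containers in the k8s.io runc state dir.
	runcOut, _ := exec.Command("sudo", "runc",
		"--root", "/run/containerd/runc/k8s.io", "list", "-f", "json").Output()
	var runcList []map[string]interface{}
	_ = json.Unmarshal(runcOut, &runcList) // the literal "null" decodes to an empty slice

	if len(runcList) != len(criIDs) {
		fmt.Printf("unpause check: list returned %d containers, but ps returned %d\n",
			len(runcList), len(criIDs))
	}
}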
	I1209 11:25:30.460973  800461 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 11:25:30.472235  800461 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1209 11:25:30.472259  800461 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1209 11:25:30.472321  800461 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1209 11:25:30.488965  800461 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1209 11:25:30.489577  800461 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-623695" does not appear in /home/jenkins/minikube-integration/20068-586689/kubeconfig
	I1209 11:25:30.489714  800461 kubeconfig.go:62] /home/jenkins/minikube-integration/20068-586689/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-623695" cluster setting kubeconfig missing "old-k8s-version-623695" context setting]
	I1209 11:25:30.490113  800461 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-586689/kubeconfig: {Name:mk6f05f318819272b7562cf231de4edaf3cc73af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
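The kubeconfig.go:47-62 lines show the repair path: the profile's cluster and context entries are missing from the CI kubeconfig, so minikube rewrites the file under a write lock (500ms retry delay, 1m timeout, per the lock metadata above). The repair amounts to something like this client-go sketch; the path and auth wiring are illustrative, not minikube's code.

package main

import (
	"k8s.io/client-go/tools/clientcmd"
	api "k8s.io/client-go/tools/clientcmd/api"
)

func main() {
	path := "/home/jenkins/.kube/config" // illustrative; the log uses the CI kubeconfig
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		cfg = api.NewConfig() // start fresh if the file is missing or unreadable
	}
	// Re-add the missing cluster and context entries for the profile.
	cfg.Clusters["old-k8s-version-623695"] = &api.Cluster{
		Server: "https://192.168.85.2:8443",
	}
	cfg.Contexts["old-k8s-version-623695"] = &api.Context{
		Cluster:  "old-k8s-version-623695",
		AuthInfo: "old-k8s-version-623695",
	}
	if err := clientcmd.WriteToFile(*cfg, path); err != nil {
		panic(err)
	}
}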
	I1209 11:25:30.491536  800461 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1209 11:25:30.505640  800461 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.85.2
	I1209 11:25:30.505680  800461 kubeadm.go:597] duration metric: took 33.414391ms to restartPrimaryControlPlane
	I1209 11:25:30.505691  800461 kubeadm.go:394] duration metric: took 112.43652ms to StartCluster
	I1209 11:25:30.505707  800461 settings.go:142] acquiring lock: {Name:mk7f755871171984acf41c83b87c2df5d7451702 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 11:25:30.505766  800461 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20068-586689/kubeconfig
	I1209 11:25:30.506466  800461 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-586689/kubeconfig: {Name:mk6f05f318819272b7562cf231de4edaf3cc73af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 11:25:30.506699  800461 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1209 11:25:30.507074  800461 config.go:182] Loaded profile config "old-k8s-version-623695": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I1209 11:25:30.507129  800461 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1209 11:25:30.507215  800461 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-623695"
	I1209 11:25:30.507245  800461 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-623695"
	W1209 11:25:30.507269  800461 addons.go:243] addon storage-provisioner should already be in state true
	I1209 11:25:30.507278  800461 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-623695"
	I1209 11:25:30.507305  800461 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-623695"
	W1209 11:25:30.507312  800461 addons.go:243] addon metrics-server should already be in state true
	I1209 11:25:30.507315  800461 host.go:66] Checking if "old-k8s-version-623695" exists ...
	I1209 11:25:30.507336  800461 host.go:66] Checking if "old-k8s-version-623695" exists ...
	I1209 11:25:30.507778  800461 cli_runner.go:164] Run: docker container inspect old-k8s-version-623695 --format={{.State.Status}}
	I1209 11:25:30.507964  800461 cli_runner.go:164] Run: docker container inspect old-k8s-version-623695 --format={{.State.Status}}
	I1209 11:25:30.512154  800461 out.go:177] * Verifying Kubernetes components...
	I1209 11:25:30.507255  800461 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-623695"
	I1209 11:25:30.512275  800461 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-623695"
	I1209 11:25:30.512635  800461 cli_runner.go:164] Run: docker container inspect old-k8s-version-623695 --format={{.State.Status}}
	I1209 11:25:30.507267  800461 addons.go:69] Setting dashboard=true in profile "old-k8s-version-623695"
	I1209 11:25:30.513350  800461 addons.go:234] Setting addon dashboard=true in "old-k8s-version-623695"
	W1209 11:25:30.513362  800461 addons.go:243] addon dashboard should already be in state true
	I1209 11:25:30.513392  800461 host.go:66] Checking if "old-k8s-version-623695" exists ...
	I1209 11:25:30.513842  800461 cli_runner.go:164] Run: docker container inspect old-k8s-version-623695 --format={{.State.Status}}
	I1209 11:25:30.517204  800461 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 11:25:30.556124  800461 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1209 11:25:30.561603  800461 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1209 11:25:30.561630  800461 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1209 11:25:30.561712  800461 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-623695
	I1209 11:25:30.618321  800461 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-623695"
	W1209 11:25:30.618346  800461 addons.go:243] addon default-storageclass should already be in state true
	I1209 11:25:30.618379  800461 host.go:66] Checking if "old-k8s-version-623695" exists ...
	I1209 11:25:30.619190  800461 cli_runner.go:164] Run: docker container inspect old-k8s-version-623695 --format={{.State.Status}}
	I1209 11:25:30.620978  800461 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1209 11:25:30.627469  800461 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 11:25:30.627596  800461 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33802 SSHKeyPath:/home/jenkins/minikube-integration/20068-586689/.minikube/machines/old-k8s-version-623695/id_rsa Username:docker}
	I1209 11:25:30.629505  800461 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 11:25:30.629528  800461 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1209 11:25:30.629596  800461 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-623695
	I1209 11:25:30.629759  800461 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I1209 11:25:30.631840  800461 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1209 11:25:30.631867  800461 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1209 11:25:30.631933  800461 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-623695
	I1209 11:25:30.674342  800461 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1209 11:25:30.674364  800461 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1209 11:25:30.674431  800461 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-623695
	I1209 11:25:30.694711  800461 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33802 SSHKeyPath:/home/jenkins/minikube-integration/20068-586689/.minikube/machines/old-k8s-version-623695/id_rsa Username:docker}
	I1209 11:25:30.695778  800461 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33802 SSHKeyPath:/home/jenkins/minikube-integration/20068-586689/.minikube/machines/old-k8s-version-623695/id_rsa Username:docker}
	I1209 11:25:30.735198  800461 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33802 SSHKeyPath:/home/jenkins/minikube-integration/20068-586689/.minikube/machines/old-k8s-version-623695/id_rsa Username:docker}
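Each sshutil.go:53 line opens an SSH session into the node through the Docker-mapped port: container port 22 is published on 127.0.0.1:33802, which is what the `docker container inspect ... "22/tcp" ... HostPort` calls above resolve. A rough equivalent with golang.org/x/crypto/ssh, using the key path and port from the log (InsecureIgnoreHostKey is acceptable for a throwaway CI container, not for production):

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/20068-586689/.minikube/machines/old-k8s-version-623695/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:33802", &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // CI-only shortcut
	})
	if err != nil {
		panic(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()
	out, err := sess.CombinedOutput("uname -m")
	fmt.Println(string(out), err)
}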
	I1209 11:25:30.739230  800461 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 11:25:30.757179  800461 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-623695" to be "Ready" ...
	I1209 11:25:30.797796  800461 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1209 11:25:30.797818  800461 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1209 11:25:30.819397  800461 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1209 11:25:30.819469  800461 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1209 11:25:30.850124  800461 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1209 11:25:30.850150  800461 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1209 11:25:30.856243  800461 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1209 11:25:30.856280  800461 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1209 11:25:30.862829  800461 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 11:25:30.882128  800461 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1209 11:25:30.891481  800461 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1209 11:25:30.891507  800461 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1209 11:25:30.913370  800461 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
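The addon manifests are applied with the cluster's own pinned kubectl binary (/var/lib/minikube/binaries/v1.20.0/kubectl), pointed at the node-local kubeconfig via a KUBECONFIG=... assignment placed before the command, which sudo accepts on its command line. A sketch of assembling that command form (illustrative helper, not minikube's ssh_runner):

package main

import (
	"fmt"
	"strings"
)

// buildApply assembles the `sudo KUBECONFIG=... kubectl apply -f a -f b`
// form seen throughout this log; sudo treats leading VAR=value tokens as
// environment assignments for the command it runs.
func buildApply(kubectl, kubeconfig string, manifests ...string) string {
	parts := []string{"sudo", "KUBECONFIG=" + kubeconfig, kubectl, "apply"}
	for _, m := range manifests {
		parts = append(parts, "-f", m)
	}
	return strings.Join(parts, " ")
}

func main() {
	fmt.Println(buildApply(
		"/var/lib/minikube/binaries/v1.20.0/kubectl",
		"/var/lib/minikube/kubeconfig",
		"/etc/kubernetes/addons/metrics-apiservice.yaml",
		"/etc/kubernetes/addons/metrics-server-service.yaml",
	))
}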
	I1209 11:25:30.978985  800461 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1209 11:25:30.979013  800461 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1209 11:25:31.111240  800461 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1209 11:25:31.111271  800461 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1209 11:25:31.229244  800461 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1209 11:25:31.229286  800461 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	W1209 11:25:31.252380  800461 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1209 11:25:31.252490  800461 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 11:25:31.252505  800461 retry.go:31] will retry after 312.30868ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1209 11:25:31.252560  800461 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 11:25:31.252573  800461 retry.go:31] will retry after 195.840828ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 11:25:31.252428  800461 retry.go:31] will retry after 325.758646ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
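Every "apply failed, will retry" / "will retry after ..." pair in the stretch that follows is the same pattern: the apiserver is still coming up after the container restart, so localhost:8443 refuses connections and retry.go reschedules the apply with a growing, jittered delay until it eventually succeeds (around 11:25:40-11:25:51 in this run). A compact sketch of that loop using k8s.io/apimachinery's wait helpers; the backoff parameters are illustrative, chosen to resemble the delays logged above.

package main

import (
	"fmt"
	"os/exec"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	backoff := wait.Backoff{
		Duration: 300 * time.Millisecond, // first delay, roughly what the log shows
		Factor:   1.5,                    // grow each attempt
		Jitter:   0.5,                    // randomize, like the uneven delays above
		Steps:    10,                     // give up eventually
	}
	err := wait.ExponentialBackoff(backoff, func() (bool, error) {
		out, err := exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
			"/var/lib/minikube/binaries/v1.20.0/kubectl",
			"apply", "--force", "-f", "/etc/kubernetes/addons/storageclass.yaml").CombinedOutput()
		if err != nil {
			fmt.Printf("apply failed, will retry: %s\n", out)
			return false, nil // not done; retry after the next backoff delay
		}
		return true, nil // applied successfully
	})
	if err != nil {
		fmt.Println("gave up:", err)
	}
}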
	I1209 11:25:31.261241  800461 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1209 11:25:31.261324  800461 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1209 11:25:31.282207  800461 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1209 11:25:31.282248  800461 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1209 11:25:31.302660  800461 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1209 11:25:31.302686  800461 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1209 11:25:31.324912  800461 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1209 11:25:31.324936  800461 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1209 11:25:31.346250  800461 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1209 11:25:31.448674  800461 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 11:25:31.448707  800461 retry.go:31] will retry after 274.574509ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 11:25:31.448832  800461 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W1209 11:25:31.546150  800461 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 11:25:31.546185  800461 retry.go:31] will retry after 194.00489ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 11:25:31.565464  800461 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 11:25:31.578858  800461 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1209 11:25:31.708671  800461 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 11:25:31.708705  800461 retry.go:31] will retry after 504.690937ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 11:25:31.724004  800461 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1209 11:25:31.740378  800461 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W1209 11:25:31.766130  800461 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 11:25:31.766213  800461 retry.go:31] will retry after 238.761685ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1209 11:25:31.928841  800461 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 11:25:31.928951  800461 retry.go:31] will retry after 521.929992ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1209 11:25:31.934075  800461 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 11:25:31.934168  800461 retry.go:31] will retry after 737.614843ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 11:25:32.008427  800461 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1209 11:25:32.097721  800461 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 11:25:32.097753  800461 retry.go:31] will retry after 578.176208ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 11:25:32.214190  800461 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1209 11:25:32.345770  800461 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 11:25:32.345802  800461 retry.go:31] will retry after 592.33363ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 11:25:32.451087  800461 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1209 11:25:32.553635  800461 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 11:25:32.553710  800461 retry.go:31] will retry after 291.196951ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 11:25:32.672013  800461 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1209 11:25:32.676520  800461 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 11:25:32.758282  800461 node_ready.go:53] error getting node "old-k8s-version-623695": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-623695": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 11:25:32.829287  800461 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 11:25:32.829372  800461 retry.go:31] will retry after 1.210941259s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1209 11:25:32.829444  800461 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 11:25:32.829471  800461 retry.go:31] will retry after 982.212498ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 11:25:32.845802  800461 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1209 11:25:32.939226  800461 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1209 11:25:32.979760  800461 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 11:25:32.979837  800461 retry.go:31] will retry after 783.958882ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1209 11:25:33.078133  800461 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 11:25:33.078232  800461 retry.go:31] will retry after 601.622997ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 11:25:33.680139  800461 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 11:25:33.764177  800461 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1209 11:25:33.800730  800461 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 11:25:33.800820  800461 retry.go:31] will retry after 1.216118305s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 11:25:33.812059  800461 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1209 11:25:33.932049  800461 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 11:25:33.932138  800461 retry.go:31] will retry after 1.428178551s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1209 11:25:34.002518  800461 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 11:25:34.002554  800461 retry.go:31] will retry after 681.832932ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 11:25:34.040831  800461 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W1209 11:25:34.155257  800461 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 11:25:34.155340  800461 retry.go:31] will retry after 1.821310198s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 11:25:34.685515  800461 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1209 11:25:34.764780  800461 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 11:25:34.764814  800461 retry.go:31] will retry after 2.480440108s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 11:25:35.017257  800461 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1209 11:25:35.124131  800461 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 11:25:35.124165  800461 retry.go:31] will retry after 1.824937625s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 11:25:35.258763  800461 node_ready.go:53] error getting node "old-k8s-version-623695": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-623695": dial tcp 192.168.85.2:8443: connect: connection refused
	I1209 11:25:35.361228  800461 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1209 11:25:35.449129  800461 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 11:25:35.449192  800461 retry.go:31] will retry after 1.536707217s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 11:25:35.977231  800461 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W1209 11:25:36.073029  800461 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 11:25:36.073066  800461 retry.go:31] will retry after 1.482200004s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 11:25:36.950140  800461 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 11:25:36.986524  800461 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1209 11:25:37.040354  800461 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 11:25:37.040422  800461 retry.go:31] will retry after 2.836538474s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1209 11:25:37.095998  800461 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 11:25:37.096049  800461 retry.go:31] will retry after 3.467911443s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 11:25:37.245468  800461 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1209 11:25:37.356729  800461 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 11:25:37.356763  800461 retry.go:31] will retry after 3.882487674s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 11:25:37.556187  800461 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W1209 11:25:37.666888  800461 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 11:25:37.666924  800461 retry.go:31] will retry after 2.230923411s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 11:25:37.758561  800461 node_ready.go:53] error getting node "old-k8s-version-623695": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-623695": dial tcp 192.168.85.2:8443: connect: connection refused
	I1209 11:25:39.878156  800461 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 11:25:39.898514  800461 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1209 11:25:40.564985  800461 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1209 11:25:41.240009  800461 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 11:25:49.609899  800461 node_ready.go:49] node "old-k8s-version-623695" has status "Ready":"True"
	I1209 11:25:49.609925  800461 node_ready.go:38] duration metric: took 18.852653029s for node "old-k8s-version-623695" to be "Ready" ...
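node_ready.go polls the apiserver directly (hence the earlier `connect: connection refused` errors at :32, :35 and :37) until the Node object reports the Ready condition as True; here that took about 18.9s. The condition test itself reduces to scanning Status.Conditions, roughly as in this hedged client-go sketch (kubeconfig path illustrative):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // illustrative
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "old-k8s-version-623695", metav1.GetOptions{})
	if err != nil {
		panic(err) // e.g. "dial tcp ... connection refused" while the apiserver restarts
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			fmt.Printf("node %q has status \"Ready\":%q\n", node.Name, c.Status)
		}
	}
}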
	I1209 11:25:49.609935  800461 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 11:25:49.846557  800461 pod_ready.go:79] waiting up to 6m0s for pod "coredns-74ff55c5b-pll5n" in "kube-system" namespace to be "Ready" ...
	I1209 11:25:49.943548  800461 pod_ready.go:93] pod "coredns-74ff55c5b-pll5n" in "kube-system" namespace has status "Ready":"True"
	I1209 11:25:49.943626  800461 pod_ready.go:82] duration metric: took 96.984304ms for pod "coredns-74ff55c5b-pll5n" in "kube-system" namespace to be "Ready" ...
	I1209 11:25:49.943686  800461 pod_ready.go:79] waiting up to 6m0s for pod "etcd-old-k8s-version-623695" in "kube-system" namespace to be "Ready" ...
	I1209 11:25:50.002789  800461 pod_ready.go:93] pod "etcd-old-k8s-version-623695" in "kube-system" namespace has status "Ready":"True"
	I1209 11:25:50.002872  800461 pod_ready.go:82] duration metric: took 59.164551ms for pod "etcd-old-k8s-version-623695" in "kube-system" namespace to be "Ready" ...
	I1209 11:25:50.002904  800461 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-623695" in "kube-system" namespace to be "Ready" ...
	I1209 11:25:50.070711  800461 pod_ready.go:93] pod "kube-apiserver-old-k8s-version-623695" in "kube-system" namespace has status "Ready":"True"
	I1209 11:25:50.070788  800461 pod_ready.go:82] duration metric: took 67.863665ms for pod "kube-apiserver-old-k8s-version-623695" in "kube-system" namespace to be "Ready" ...
	I1209 11:25:50.070816  800461 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-623695" in "kube-system" namespace to be "Ready" ...
	I1209 11:25:50.588295  800461 pod_ready.go:93] pod "kube-controller-manager-old-k8s-version-623695" in "kube-system" namespace has status "Ready":"True"
	I1209 11:25:50.588372  800461 pod_ready.go:82] duration metric: took 517.535912ms for pod "kube-controller-manager-old-k8s-version-623695" in "kube-system" namespace to be "Ready" ...
	I1209 11:25:50.588400  800461 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-nftmg" in "kube-system" namespace to be "Ready" ...
	I1209 11:25:50.603440  800461 pod_ready.go:93] pod "kube-proxy-nftmg" in "kube-system" namespace has status "Ready":"True"
	I1209 11:25:50.603514  800461 pod_ready.go:82] duration metric: took 15.08574ms for pod "kube-proxy-nftmg" in "kube-system" namespace to be "Ready" ...
	I1209 11:25:50.603542  800461 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-623695" in "kube-system" namespace to be "Ready" ...
	I1209 11:25:50.877395  800461 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (10.999192521s)
	I1209 11:25:51.026372  800461 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (11.127774085s)
	I1209 11:25:51.026468  800461 addons.go:475] Verifying addon metrics-server=true in "old-k8s-version-623695"
	I1209 11:25:51.202494  800461 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (10.637440809s)
	I1209 11:25:51.202834  800461 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.962791314s)
	I1209 11:25:51.205812  800461 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-623695 addons enable metrics-server
	
	I1209 11:25:51.208913  800461 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner, dashboard
	I1209 11:25:51.212051  800461 addons.go:510] duration metric: took 20.704913758s for enable addons: enabled=[default-storageclass metrics-server storage-provisioner dashboard]
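Editor's note: each addon above is enabled by one `kubectl apply --force` with repeated -f flags, run through ssh_runner.go inside the node. A local sketch of the same command shape (binary path, KUBECONFIG, and manifest path copied from the log; the sudo/SSH plumbing is omitted and this is illustration, not minikube's code):

	package main

	import (
		"os"
		"os/exec"
	)

	func main() {
		// One apply call per addon, with as many -f flags as the addon has manifests.
		args := []string{
			"apply", "--force",
			"-f", "/etc/kubernetes/addons/storage-provisioner.yaml",
		}
		cmd := exec.Command("/var/lib/minikube/binaries/v1.20.0/kubectl", args...)
		cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			os.Exit(1)
		}
	}
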
	I1209 11:25:52.625412  800461 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-623695" in "kube-system" namespace has status "Ready":"False"
	I1209 11:25:55.110880  800461 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-623695" in "kube-system" namespace has status "Ready":"False"
	I1209 11:25:57.610316  800461 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-623695" in "kube-system" namespace has status "Ready":"False"
	I1209 11:25:59.611257  800461 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-623695" in "kube-system" namespace has status "Ready":"False"
	I1209 11:26:02.111059  800461 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-623695" in "kube-system" namespace has status "Ready":"False"
	I1209 11:26:04.111134  800461 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-623695" in "kube-system" namespace has status "Ready":"False"
	I1209 11:26:06.610287  800461 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-623695" in "kube-system" namespace has status "Ready":"False"
	I1209 11:26:08.616901  800461 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-623695" in "kube-system" namespace has status "Ready":"False"
	I1209 11:26:11.114544  800461 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-623695" in "kube-system" namespace has status "Ready":"False"
	I1209 11:26:13.611631  800461 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-623695" in "kube-system" namespace has status "Ready":"False"
	I1209 11:26:16.110572  800461 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-623695" in "kube-system" namespace has status "Ready":"False"
	I1209 11:26:18.110934  800461 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-623695" in "kube-system" namespace has status "Ready":"False"
	I1209 11:26:20.111935  800461 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-623695" in "kube-system" namespace has status "Ready":"False"
	I1209 11:26:22.610845  800461 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-623695" in "kube-system" namespace has status "Ready":"False"
	I1209 11:26:25.110954  800461 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-623695" in "kube-system" namespace has status "Ready":"False"
	I1209 11:26:27.610279  800461 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-623695" in "kube-system" namespace has status "Ready":"False"
	I1209 11:26:29.611193  800461 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-623695" in "kube-system" namespace has status "Ready":"False"
	I1209 11:26:31.611303  800461 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-623695" in "kube-system" namespace has status "Ready":"False"
	I1209 11:26:34.110581  800461 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-623695" in "kube-system" namespace has status "Ready":"False"
	I1209 11:26:36.111338  800461 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-623695" in "kube-system" namespace has status "Ready":"False"
	I1209 11:26:38.120073  800461 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-623695" in "kube-system" namespace has status "Ready":"False"
	I1209 11:26:40.610297  800461 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-623695" in "kube-system" namespace has status "Ready":"False"
	I1209 11:26:42.614550  800461 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-623695" in "kube-system" namespace has status "Ready":"False"
	I1209 11:26:45.114891  800461 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-623695" in "kube-system" namespace has status "Ready":"False"
	I1209 11:26:47.610163  800461 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-623695" in "kube-system" namespace has status "Ready":"False"
	I1209 11:26:50.112722  800461 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-623695" in "kube-system" namespace has status "Ready":"False"
	I1209 11:26:52.615239  800461 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-623695" in "kube-system" namespace has status "Ready":"False"
	I1209 11:26:55.111837  800461 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-623695" in "kube-system" namespace has status "Ready":"False"
	I1209 11:26:57.610113  800461 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-623695" in "kube-system" namespace has status "Ready":"False"
	I1209 11:26:59.610695  800461 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-623695" in "kube-system" namespace has status "Ready":"False"
	I1209 11:27:02.111921  800461 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-623695" in "kube-system" namespace has status "Ready":"False"
	I1209 11:27:04.611135  800461 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-623695" in "kube-system" namespace has status "Ready":"False"
	I1209 11:27:06.611100  800461 pod_ready.go:93] pod "kube-scheduler-old-k8s-version-623695" in "kube-system" namespace has status "Ready":"True"
	I1209 11:27:06.611126  800461 pod_ready.go:82] duration metric: took 1m16.007564198s for pod "kube-scheduler-old-k8s-version-623695" in "kube-system" namespace to be "Ready" ...
	I1209 11:27:06.611139  800461 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace to be "Ready" ...
	I1209 11:27:08.617505  800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
	I1209 11:27:10.617834  800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
	I1209 11:27:12.619601  800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
	I1209 11:27:15.144934  800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
	I1209 11:27:17.617034  800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
	I1209 11:27:19.617599  800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
	I1209 11:27:21.618035  800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
	I1209 11:27:23.619227  800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
	I1209 11:27:25.627097  800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
	I1209 11:27:28.118915  800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
	I1209 11:27:30.142388  800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
	I1209 11:27:32.619085  800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
	I1209 11:27:35.118726  800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
	I1209 11:27:37.122103  800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
	I1209 11:27:39.619385  800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
	I1209 11:27:42.119266  800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
	I1209 11:27:44.617266  800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
	I1209 11:27:46.618426  800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
	I1209 11:27:49.117806  800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
	I1209 11:27:51.118538  800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
	I1209 11:27:53.617617  800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
	I1209 11:27:55.617834  800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
	I1209 11:27:57.617899  800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
	I1209 11:27:59.619974  800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
	I1209 11:28:02.117761  800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
	I1209 11:28:04.119300  800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
	I1209 11:28:06.618390  800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
	I1209 11:28:09.118381  800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
	I1209 11:28:11.622952  800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
	I1209 11:28:14.117687  800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
	I1209 11:28:16.121672  800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
	I1209 11:28:18.618426  800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
	I1209 11:28:21.117958  800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
	I1209 11:28:23.118578  800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
	I1209 11:28:25.130617  800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
	I1209 11:28:27.618538  800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
	I1209 11:28:29.618910  800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
	I1209 11:28:32.118785  800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
	I1209 11:28:34.618500  800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
	I1209 11:28:37.118559  800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
	I1209 11:28:39.119055  800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
	I1209 11:28:41.618505  800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
	I1209 11:28:44.117630  800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
	I1209 11:28:46.118112  800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
	I1209 11:28:48.118655  800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
	I1209 11:28:50.617627  800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
	I1209 11:28:53.117901  800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
	I1209 11:28:55.118792  800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
	I1209 11:28:57.619062  800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
	I1209 11:29:00.183641  800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
	I1209 11:29:02.618152  800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
	I1209 11:29:05.118980  800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
	I1209 11:29:07.119245  800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
	I1209 11:29:09.618809  800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
	I1209 11:29:12.118459  800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
	I1209 11:29:14.119337  800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
	I1209 11:29:16.617934  800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
	I1209 11:29:19.118593  800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
	I1209 11:29:21.617424  800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
	I1209 11:29:23.617712  800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
	I1209 11:29:25.617986  800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
	I1209 11:29:27.618915  800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
	I1209 11:29:29.619981  800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
	I1209 11:29:32.118472  800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
	I1209 11:29:34.119167  800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
	I1209 11:29:36.617888  800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
	I1209 11:29:39.118642  800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
	I1209 11:29:41.202232  800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
	I1209 11:29:43.618179  800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
	I1209 11:29:46.118172  800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
	I1209 11:29:48.118656  800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
	I1209 11:29:50.618674  800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
	I1209 11:29:53.116882  800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
	I1209 11:29:55.117748  800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
	I1209 11:29:57.618147  800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
	I1209 11:29:59.619373  800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
	I1209 11:30:01.619430  800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
	I1209 11:30:04.117668  800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
	I1209 11:30:06.119115  800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
	I1209 11:30:08.617867  800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
	I1209 11:30:11.118821  800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
	I1209 11:30:13.617499  800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
	I1209 11:30:15.619669  800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
	I1209 11:30:18.117622  800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
	I1209 11:30:20.119694  800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
	I1209 11:30:22.618811  800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
	I1209 11:30:25.118847  800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
	I1209 11:30:27.617398  800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
	I1209 11:30:29.619912  800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
	I1209 11:30:31.685885  800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
	I1209 11:30:34.117668  800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
	I1209 11:30:36.118114  800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
	I1209 11:30:38.119922  800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
	I1209 11:30:40.618478  800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
	I1209 11:30:43.117903  800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
	I1209 11:30:45.130750  800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
	I1209 11:30:47.618316  800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
	I1209 11:30:49.625165  800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
	I1209 11:30:52.117848  800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
	I1209 11:30:54.118545  800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
	I1209 11:30:56.118884  800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
	I1209 11:30:58.617815  800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
	I1209 11:31:01.118929  800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
	I1209 11:31:03.617942  800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
	I1209 11:31:05.618817  800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
	I1209 11:31:06.619335  800461 pod_ready.go:82] duration metric: took 4m0.008180545s for pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace to be "Ready" ...
	E1209 11:31:06.619362  800461 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1209 11:31:06.619374  800461 pod_ready.go:39] duration metric: took 5m17.00942816s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
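Editor's note: the WaitExtra failure above is simply the 4m0s per-pod budget expiring, surfaced as context.DeadlineExceeded; the kubelet problems gathered further down show why the pod could never become Ready (the test intentionally pins metrics-server's image to fake.domain, so every pull fails). The error shape in isolation, as a runnable sketch:

	package main

	import (
		"context"
		"errors"
		"fmt"
		"time"
	)

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 50*time.Millisecond)
		defer cancel()
		<-ctx.Done() // the readiness condition never became true within the budget
		fmt.Println(errors.Is(ctx.Err(), context.DeadlineExceeded)) // prints: true
	}
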
	I1209 11:31:06.619391  800461 api_server.go:52] waiting for apiserver process to appear ...
	I1209 11:31:06.619429  800461 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:31:06.619498  800461 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:31:06.665588  800461 cri.go:89] found id: "92b50938c8b97d22b97b90d7211713e83c2e408b0a5bfa70a5b801a11893dfe2"
	I1209 11:31:06.665615  800461 cri.go:89] found id: "8f5f2eca7e918147351695f83e907d55498a88bcb6c566f93410605398939265"
	I1209 11:31:06.665621  800461 cri.go:89] found id: ""
	I1209 11:31:06.665629  800461 logs.go:282] 2 containers: [92b50938c8b97d22b97b90d7211713e83c2e408b0a5bfa70a5b801a11893dfe2 8f5f2eca7e918147351695f83e907d55498a88bcb6c566f93410605398939265]
	I1209 11:31:06.665689  800461 ssh_runner.go:195] Run: which crictl
	I1209 11:31:06.669660  800461 ssh_runner.go:195] Run: which crictl
	I1209 11:31:06.674395  800461 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 11:31:06.674471  800461 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:31:06.717779  800461 cri.go:89] found id: "c841cccf0a5bdd1fe2fca556e5e36b880a686f41d9146ef7788f247bce61a9d5"
	I1209 11:31:06.717802  800461 cri.go:89] found id: "2b8b97c8ef833f12b7f54ccb1f4f9e43ee5e96f62f8bb9ee22ef50e58e3ea81f"
	I1209 11:31:06.717807  800461 cri.go:89] found id: ""
	I1209 11:31:06.717815  800461 logs.go:282] 2 containers: [c841cccf0a5bdd1fe2fca556e5e36b880a686f41d9146ef7788f247bce61a9d5 2b8b97c8ef833f12b7f54ccb1f4f9e43ee5e96f62f8bb9ee22ef50e58e3ea81f]
	I1209 11:31:06.717876  800461 ssh_runner.go:195] Run: which crictl
	I1209 11:31:06.721820  800461 ssh_runner.go:195] Run: which crictl
	I1209 11:31:06.725891  800461 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 11:31:06.725964  800461 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:31:06.779560  800461 cri.go:89] found id: "af856b017afebc2eff0a52c30b40b9ae7d18fbed9646a2cd62795c30eae0a468"
	I1209 11:31:06.779586  800461 cri.go:89] found id: "ed42b2a1e21f5bc2262274dce47ab17b0351c70af13f20f07756dca1fc28f478"
	I1209 11:31:06.779592  800461 cri.go:89] found id: ""
	I1209 11:31:06.779600  800461 logs.go:282] 2 containers: [af856b017afebc2eff0a52c30b40b9ae7d18fbed9646a2cd62795c30eae0a468 ed42b2a1e21f5bc2262274dce47ab17b0351c70af13f20f07756dca1fc28f478]
	I1209 11:31:06.779663  800461 ssh_runner.go:195] Run: which crictl
	I1209 11:31:06.783828  800461 ssh_runner.go:195] Run: which crictl
	I1209 11:31:06.787756  800461 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:31:06.787834  800461 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:31:06.846135  800461 cri.go:89] found id: "ae41a05fd4b1117afac3aa1f7764574fd0ed6427f9d72dc4841a9f2d948eeec3"
	I1209 11:31:06.846161  800461 cri.go:89] found id: "0a9c5fc2481f8b0b984d6634dcbb9354b762189dacc2253907afe96468b89d47"
	I1209 11:31:06.846166  800461 cri.go:89] found id: ""
	I1209 11:31:06.846174  800461 logs.go:282] 2 containers: [ae41a05fd4b1117afac3aa1f7764574fd0ed6427f9d72dc4841a9f2d948eeec3 0a9c5fc2481f8b0b984d6634dcbb9354b762189dacc2253907afe96468b89d47]
	I1209 11:31:06.846237  800461 ssh_runner.go:195] Run: which crictl
	I1209 11:31:06.850232  800461 ssh_runner.go:195] Run: which crictl
	I1209 11:31:06.854393  800461 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:31:06.854467  800461 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:31:06.901758  800461 cri.go:89] found id: "167b84f8f987c313cf51f0bcb5bd6488b6ad7b450cd789460a00b513af86bb98"
	I1209 11:31:06.901826  800461 cri.go:89] found id: "a9ed84f8cfb61c9caff73d1ed13931a69b4186cabc325c3cc1809699aa1be74e"
	I1209 11:31:06.901845  800461 cri.go:89] found id: ""
	I1209 11:31:06.901870  800461 logs.go:282] 2 containers: [167b84f8f987c313cf51f0bcb5bd6488b6ad7b450cd789460a00b513af86bb98 a9ed84f8cfb61c9caff73d1ed13931a69b4186cabc325c3cc1809699aa1be74e]
	I1209 11:31:06.901962  800461 ssh_runner.go:195] Run: which crictl
	I1209 11:31:06.906130  800461 ssh_runner.go:195] Run: which crictl
	I1209 11:31:06.909943  800461 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:31:06.910032  800461 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:31:06.952413  800461 cri.go:89] found id: "8afdcb5ba074eac7202e301ff2b67765f72a8dd4e47a0100cf89910df022caaa"
	I1209 11:31:06.952495  800461 cri.go:89] found id: "25fc7bce15ad22f8de3baaeb0bb7f687bb2b3dc94e08ab8eb20977713310736b"
	I1209 11:31:06.952515  800461 cri.go:89] found id: ""
	I1209 11:31:06.952538  800461 logs.go:282] 2 containers: [8afdcb5ba074eac7202e301ff2b67765f72a8dd4e47a0100cf89910df022caaa 25fc7bce15ad22f8de3baaeb0bb7f687bb2b3dc94e08ab8eb20977713310736b]
	I1209 11:31:06.952627  800461 ssh_runner.go:195] Run: which crictl
	I1209 11:31:06.956769  800461 ssh_runner.go:195] Run: which crictl
	I1209 11:31:06.960982  800461 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 11:31:06.961110  800461 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:31:07.003555  800461 cri.go:89] found id: "91cb2ed43dfec514bb35f6e6633f1b7bc69390d1eb0c98d7fd830339fc56e9e3"
	I1209 11:31:07.003582  800461 cri.go:89] found id: "eb174547e077389a60898a71aa5ab17ac0877a9aa4886acc78d7d9454ff6ec44"
	I1209 11:31:07.003587  800461 cri.go:89] found id: ""
	I1209 11:31:07.003595  800461 logs.go:282] 2 containers: [91cb2ed43dfec514bb35f6e6633f1b7bc69390d1eb0c98d7fd830339fc56e9e3 eb174547e077389a60898a71aa5ab17ac0877a9aa4886acc78d7d9454ff6ec44]
	I1209 11:31:07.003768  800461 ssh_runner.go:195] Run: which crictl
	I1209 11:31:07.010306  800461 ssh_runner.go:195] Run: which crictl
	I1209 11:31:07.014583  800461 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:31:07.014718  800461 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:31:07.060225  800461 cri.go:89] found id: "b5b12d4047e8983ca175dce7ad530195e42f6c97736f3423b56ba4b67eee4cfd"
	I1209 11:31:07.060256  800461 cri.go:89] found id: ""
	I1209 11:31:07.060264  800461 logs.go:282] 1 containers: [b5b12d4047e8983ca175dce7ad530195e42f6c97736f3423b56ba4b67eee4cfd]
	I1209 11:31:07.060332  800461 ssh_runner.go:195] Run: which crictl
	I1209 11:31:07.064188  800461 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1209 11:31:07.064258  800461 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 11:31:07.115430  800461 cri.go:89] found id: "663485e6313972a7cfcbedd678323ecb2f884257c37073d5b2e988fad2ae5465"
	I1209 11:31:07.115452  800461 cri.go:89] found id: "1c864e51ed369180554c763558a548ef00ea435a92cc98a093c1261e29cf2bbf"
	I1209 11:31:07.115457  800461 cri.go:89] found id: ""
	I1209 11:31:07.115465  800461 logs.go:282] 2 containers: [663485e6313972a7cfcbedd678323ecb2f884257c37073d5b2e988fad2ae5465 1c864e51ed369180554c763558a548ef00ea435a92cc98a093c1261e29cf2bbf]
	I1209 11:31:07.115529  800461 ssh_runner.go:195] Run: which crictl
	I1209 11:31:07.119609  800461 ssh_runner.go:195] Run: which crictl
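Editor's note: the repeated cri.go listing and `which crictl` pairs above collect container IDs per component (including exited containers, hence two IDs each after the restart) so their logs can be gathered next. The discovery step reduces to one crictl invocation per name; a standalone sketch of that shape (the sudo/remote execution handled by ssh_runner.go is omitted):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerIDs returns the IDs of all containers, in any state, whose
	// name matches; this mirrors `sudo crictl ps -a --quiet --name=<name>`.
	func containerIDs(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil // one ID per output line
	}

	func main() {
		for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
			ids, err := containerIDs(name)
			fmt.Println(name, ids, err)
		}
	}
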
	I1209 11:31:07.123680  800461 logs.go:123] Gathering logs for coredns [ed42b2a1e21f5bc2262274dce47ab17b0351c70af13f20f07756dca1fc28f478] ...
	I1209 11:31:07.123708  800461 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ed42b2a1e21f5bc2262274dce47ab17b0351c70af13f20f07756dca1fc28f478"
	I1209 11:31:07.174110  800461 logs.go:123] Gathering logs for kube-controller-manager [25fc7bce15ad22f8de3baaeb0bb7f687bb2b3dc94e08ab8eb20977713310736b] ...
	I1209 11:31:07.174139  800461 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 25fc7bce15ad22f8de3baaeb0bb7f687bb2b3dc94e08ab8eb20977713310736b"
	I1209 11:31:07.253927  800461 logs.go:123] Gathering logs for storage-provisioner [1c864e51ed369180554c763558a548ef00ea435a92cc98a093c1261e29cf2bbf] ...
	I1209 11:31:07.253963  800461 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c864e51ed369180554c763558a548ef00ea435a92cc98a093c1261e29cf2bbf"
	I1209 11:31:07.297941  800461 logs.go:123] Gathering logs for containerd ...
	I1209 11:31:07.297970  800461 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 11:31:07.363015  800461 logs.go:123] Gathering logs for container status ...
	I1209 11:31:07.363062  800461 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:31:07.415908  800461 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:31:07.415938  800461 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 11:31:07.562065  800461 logs.go:123] Gathering logs for kube-apiserver [92b50938c8b97d22b97b90d7211713e83c2e408b0a5bfa70a5b801a11893dfe2] ...
	I1209 11:31:07.562098  800461 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92b50938c8b97d22b97b90d7211713e83c2e408b0a5bfa70a5b801a11893dfe2"
	I1209 11:31:07.628523  800461 logs.go:123] Gathering logs for coredns [af856b017afebc2eff0a52c30b40b9ae7d18fbed9646a2cd62795c30eae0a468] ...
	I1209 11:31:07.628564  800461 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 af856b017afebc2eff0a52c30b40b9ae7d18fbed9646a2cd62795c30eae0a468"
	I1209 11:31:07.676860  800461 logs.go:123] Gathering logs for kube-scheduler [0a9c5fc2481f8b0b984d6634dcbb9354b762189dacc2253907afe96468b89d47] ...
	I1209 11:31:07.676891  800461 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0a9c5fc2481f8b0b984d6634dcbb9354b762189dacc2253907afe96468b89d47"
	I1209 11:31:07.723056  800461 logs.go:123] Gathering logs for kindnet [eb174547e077389a60898a71aa5ab17ac0877a9aa4886acc78d7d9454ff6ec44] ...
	I1209 11:31:07.723091  800461 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb174547e077389a60898a71aa5ab17ac0877a9aa4886acc78d7d9454ff6ec44"
	I1209 11:31:07.780619  800461 logs.go:123] Gathering logs for kubernetes-dashboard [b5b12d4047e8983ca175dce7ad530195e42f6c97736f3423b56ba4b67eee4cfd] ...
	I1209 11:31:07.780653  800461 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b5b12d4047e8983ca175dce7ad530195e42f6c97736f3423b56ba4b67eee4cfd"
	I1209 11:31:07.823020  800461 logs.go:123] Gathering logs for kubelet ...
	I1209 11:31:07.823047  800461 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1209 11:31:07.878389  800461 logs.go:138] Found kubelet problem: Dec 09 11:25:49 old-k8s-version-623695 kubelet[663]: E1209 11:25:49.528000     663 reflector.go:138] object-"kube-system"/"coredns-token-b78rj": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-b78rj" is forbidden: User "system:node:old-k8s-version-623695" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-623695' and this object
	W1209 11:31:07.878649  800461 logs.go:138] Found kubelet problem: Dec 09 11:25:49 old-k8s-version-623695 kubelet[663]: E1209 11:25:49.528077     663 reflector.go:138] object-"kube-system"/"kindnet-token-nl827": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-nl827" is forbidden: User "system:node:old-k8s-version-623695" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-623695' and this object
	W1209 11:31:07.878882  800461 logs.go:138] Found kubelet problem: Dec 09 11:25:49 old-k8s-version-623695 kubelet[663]: E1209 11:25:49.532699     663 reflector.go:138] object-"kube-system"/"storage-provisioner-token-sw5w9": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-sw5w9" is forbidden: User "system:node:old-k8s-version-623695" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-623695' and this object
	W1209 11:31:07.879083  800461 logs.go:138] Found kubelet problem: Dec 09 11:25:49 old-k8s-version-623695 kubelet[663]: E1209 11:25:49.532801     663 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-623695" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-623695' and this object
	W1209 11:31:07.879293  800461 logs.go:138] Found kubelet problem: Dec 09 11:25:49 old-k8s-version-623695 kubelet[663]: E1209 11:25:49.532864     663 reflector.go:138] object-"default"/"default-token-pgtqr": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-pgtqr" is forbidden: User "system:node:old-k8s-version-623695" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-623695' and this object
	W1209 11:31:07.879510  800461 logs.go:138] Found kubelet problem: Dec 09 11:25:49 old-k8s-version-623695 kubelet[663]: E1209 11:25:49.532917     663 reflector.go:138] object-"kube-system"/"kube-proxy-token-tnwqj": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-tnwqj" is forbidden: User "system:node:old-k8s-version-623695" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-623695' and this object
	W1209 11:31:07.879733  800461 logs.go:138] Found kubelet problem: Dec 09 11:25:49 old-k8s-version-623695 kubelet[663]: E1209 11:25:49.532965     663 reflector.go:138] object-"kube-system"/"metrics-server-token-hcpl8": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-hcpl8" is forbidden: User "system:node:old-k8s-version-623695" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-623695' and this object
	W1209 11:31:07.879941  800461 logs.go:138] Found kubelet problem: Dec 09 11:25:49 old-k8s-version-623695 kubelet[663]: E1209 11:25:49.533017     663 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-623695" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-623695' and this object
	W1209 11:31:07.890312  800461 logs.go:138] Found kubelet problem: Dec 09 11:25:51 old-k8s-version-623695 kubelet[663]: E1209 11:25:51.720038     663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W1209 11:31:07.890526  800461 logs.go:138] Found kubelet problem: Dec 09 11:25:51 old-k8s-version-623695 kubelet[663]: E1209 11:25:51.747865     663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1209 11:31:07.893612  800461 logs.go:138] Found kubelet problem: Dec 09 11:26:03 old-k8s-version-623695 kubelet[663]: E1209 11:26:03.558736     663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W1209 11:31:07.895693  800461 logs.go:138] Found kubelet problem: Dec 09 11:26:14 old-k8s-version-623695 kubelet[663]: E1209 11:26:14.883179     663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
	W1209 11:31:07.895879  800461 logs.go:138] Found kubelet problem: Dec 09 11:26:15 old-k8s-version-623695 kubelet[663]: E1209 11:26:15.549936     663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1209 11:31:07.896212  800461 logs.go:138] Found kubelet problem: Dec 09 11:26:15 old-k8s-version-623695 kubelet[663]: E1209 11:26:15.890882     663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
	W1209 11:31:07.896874  800461 logs.go:138] Found kubelet problem: Dec 09 11:26:19 old-k8s-version-623695 kubelet[663]: E1209 11:26:19.928539     663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
	W1209 11:31:07.897362  800461 logs.go:138] Found kubelet problem: Dec 09 11:26:21 old-k8s-version-623695 kubelet[663]: E1209 11:26:21.930997     663 pod_workers.go:191] Error syncing pod a4b9e510-c334-4949-a8ad-1f3f41854e03 ("storage-provisioner_kube-system(a4b9e510-c334-4949-a8ad-1f3f41854e03)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(a4b9e510-c334-4949-a8ad-1f3f41854e03)"
	W1209 11:31:07.899800  800461 logs.go:138] Found kubelet problem: Dec 09 11:26:28 old-k8s-version-623695 kubelet[663]: E1209 11:26:28.556809     663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W1209 11:31:07.900858  800461 logs.go:138] Found kubelet problem: Dec 09 11:26:33 old-k8s-version-623695 kubelet[663]: E1209 11:26:33.968067     663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
	W1209 11:31:07.901194  800461 logs.go:138] Found kubelet problem: Dec 09 11:26:39 old-k8s-version-623695 kubelet[663]: E1209 11:26:39.927886     663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
	W1209 11:31:07.901381  800461 logs.go:138] Found kubelet problem: Dec 09 11:26:42 old-k8s-version-623695 kubelet[663]: E1209 11:26:42.551170     663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1209 11:31:07.901727  800461 logs.go:138] Found kubelet problem: Dec 09 11:26:52 old-k8s-version-623695 kubelet[663]: E1209 11:26:52.546429     663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
	W1209 11:31:07.901913  800461 logs.go:138] Found kubelet problem: Dec 09 11:26:53 old-k8s-version-623695 kubelet[663]: E1209 11:26:53.546792     663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1209 11:31:07.902096  800461 logs.go:138] Found kubelet problem: Dec 09 11:27:04 old-k8s-version-623695 kubelet[663]: E1209 11:27:04.547503     663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1209 11:31:07.902691  800461 logs.go:138] Found kubelet problem: Dec 09 11:27:06 old-k8s-version-623695 kubelet[663]: E1209 11:27:06.138491     663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
	W1209 11:31:07.903020  800461 logs.go:138] Found kubelet problem: Dec 09 11:27:09 old-k8s-version-623695 kubelet[663]: E1209 11:27:09.927608     663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
	W1209 11:31:07.905540  800461 logs.go:138] Found kubelet problem: Dec 09 11:27:15 old-k8s-version-623695 kubelet[663]: E1209 11:27:15.558173     663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W1209 11:31:07.905873  800461 logs.go:138] Found kubelet problem: Dec 09 11:27:22 old-k8s-version-623695 kubelet[663]: E1209 11:27:22.550284     663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
	W1209 11:31:07.906057  800461 logs.go:138] Found kubelet problem: Dec 09 11:27:29 old-k8s-version-623695 kubelet[663]: E1209 11:27:29.559056     663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1209 11:31:07.906385  800461 logs.go:138] Found kubelet problem: Dec 09 11:27:35 old-k8s-version-623695 kubelet[663]: E1209 11:27:35.546799     663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
	W1209 11:31:07.906569  800461 logs.go:138] Found kubelet problem: Dec 09 11:27:43 old-k8s-version-623695 kubelet[663]: E1209 11:27:43.546652     663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1209 11:31:07.907158  800461 logs.go:138] Found kubelet problem: Dec 09 11:27:47 old-k8s-version-623695 kubelet[663]: E1209 11:27:47.281093     663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
	W1209 11:31:07.907490  800461 logs.go:138] Found kubelet problem: Dec 09 11:27:49 old-k8s-version-623695 kubelet[663]: E1209 11:27:49.927704     663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
	W1209 11:31:07.907675  800461 logs.go:138] Found kubelet problem: Dec 09 11:27:55 old-k8s-version-623695 kubelet[663]: E1209 11:27:55.546651     663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1209 11:31:07.908004  800461 logs.go:138] Found kubelet problem: Dec 09 11:28:03 old-k8s-version-623695 kubelet[663]: E1209 11:28:03.546208     663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
	W1209 11:31:07.908219  800461 logs.go:138] Found kubelet problem: Dec 09 11:28:10 old-k8s-version-623695 kubelet[663]: E1209 11:28:10.546626     663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1209 11:31:07.908550  800461 logs.go:138] Found kubelet problem: Dec 09 11:28:15 old-k8s-version-623695 kubelet[663]: E1209 11:28:15.546835     663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
	W1209 11:31:07.908738  800461 logs.go:138] Found kubelet problem: Dec 09 11:28:23 old-k8s-version-623695 kubelet[663]: E1209 11:28:23.546586     663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1209 11:31:07.909067  800461 logs.go:138] Found kubelet problem: Dec 09 11:28:26 old-k8s-version-623695 kubelet[663]: E1209 11:28:26.546890     663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
	W1209 11:31:07.911535  800461 logs.go:138] Found kubelet problem: Dec 09 11:28:38 old-k8s-version-623695 kubelet[663]: E1209 11:28:38.563060     663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W1209 11:31:07.911870  800461 logs.go:138] Found kubelet problem: Dec 09 11:28:40 old-k8s-version-623695 kubelet[663]: E1209 11:28:40.546787     663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
	W1209 11:31:07.912055  800461 logs.go:138] Found kubelet problem: Dec 09 11:28:50 old-k8s-version-623695 kubelet[663]: E1209 11:28:50.546870     663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1209 11:31:07.912386  800461 logs.go:138] Found kubelet problem: Dec 09 11:28:55 old-k8s-version-623695 kubelet[663]: E1209 11:28:55.546234     663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
	W1209 11:31:07.912570  800461 logs.go:138] Found kubelet problem: Dec 09 11:29:03 old-k8s-version-623695 kubelet[663]: E1209 11:29:03.546823     663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1209 11:31:07.913179  800461 logs.go:138] Found kubelet problem: Dec 09 11:29:10 old-k8s-version-623695 kubelet[663]: E1209 11:29:10.508888     663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
	W1209 11:31:07.913366  800461 logs.go:138] Found kubelet problem: Dec 09 11:29:17 old-k8s-version-623695 kubelet[663]: E1209 11:29:17.546618     663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1209 11:31:07.913697  800461 logs.go:138] Found kubelet problem: Dec 09 11:29:19 old-k8s-version-623695 kubelet[663]: E1209 11:29:19.928082     663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
	W1209 11:31:07.913886  800461 logs.go:138] Found kubelet problem: Dec 09 11:29:28 old-k8s-version-623695 kubelet[663]: E1209 11:29:28.547234     663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1209 11:31:07.914214  800461 logs.go:138] Found kubelet problem: Dec 09 11:29:31 old-k8s-version-623695 kubelet[663]: E1209 11:29:31.546227     663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
	W1209 11:31:07.914401  800461 logs.go:138] Found kubelet problem: Dec 09 11:29:41 old-k8s-version-623695 kubelet[663]: E1209 11:29:41.546721     663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1209 11:31:07.914730  800461 logs.go:138] Found kubelet problem: Dec 09 11:29:43 old-k8s-version-623695 kubelet[663]: E1209 11:29:43.546444     663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
	W1209 11:31:07.915057  800461 logs.go:138] Found kubelet problem: Dec 09 11:29:56 old-k8s-version-623695 kubelet[663]: E1209 11:29:56.547387     663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
	W1209 11:31:07.915242  800461 logs.go:138] Found kubelet problem: Dec 09 11:29:56 old-k8s-version-623695 kubelet[663]: E1209 11:29:56.548186     663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1209 11:31:07.915570  800461 logs.go:138] Found kubelet problem: Dec 09 11:30:09 old-k8s-version-623695 kubelet[663]: E1209 11:30:09.546239     663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
	W1209 11:31:07.915756  800461 logs.go:138] Found kubelet problem: Dec 09 11:30:11 old-k8s-version-623695 kubelet[663]: E1209 11:30:11.546522     663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1209 11:31:07.916089  800461 logs.go:138] Found kubelet problem: Dec 09 11:30:20 old-k8s-version-623695 kubelet[663]: E1209 11:30:20.547174     663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
	W1209 11:31:07.916274  800461 logs.go:138] Found kubelet problem: Dec 09 11:30:26 old-k8s-version-623695 kubelet[663]: E1209 11:30:26.547093     663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1209 11:31:07.916601  800461 logs.go:138] Found kubelet problem: Dec 09 11:30:33 old-k8s-version-623695 kubelet[663]: E1209 11:30:33.546231     663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
	W1209 11:31:07.916786  800461 logs.go:138] Found kubelet problem: Dec 09 11:30:39 old-k8s-version-623695 kubelet[663]: E1209 11:30:39.546660     663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1209 11:31:07.917118  800461 logs.go:138] Found kubelet problem: Dec 09 11:30:44 old-k8s-version-623695 kubelet[663]: E1209 11:30:44.548327     663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
	W1209 11:31:07.917310  800461 logs.go:138] Found kubelet problem: Dec 09 11:30:50 old-k8s-version-623695 kubelet[663]: E1209 11:30:50.546557     663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1209 11:31:07.917646  800461 logs.go:138] Found kubelet problem: Dec 09 11:30:56 old-k8s-version-623695 kubelet[663]: E1209 11:30:56.546828     663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
	W1209 11:31:07.917830  800461 logs.go:138] Found kubelet problem: Dec 09 11:31:05 old-k8s-version-623695 kubelet[663]: E1209 11:31:05.546612     663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1209 11:31:07.918159  800461 logs.go:138] Found kubelet problem: Dec 09 11:31:07 old-k8s-version-623695 kubelet[663]: E1209 11:31:07.547033     663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
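Every ImagePullBackOff entry above traces back to the same root cause recorded at 11:28:38: DNS resolution of fake.domain fails, apparently by design given that the metrics-server image is pointed at a non-existent registry host. A minimal hand-check on the node, assuming crictl and nslookup are available there (an assumption; these commands are not part of the captured run):

	# Expect NXDOMAIN: fake.domain should not resolve
	nslookup fake.domain
	# Expect the same "failed to resolve reference" error the kubelet logged
	sudo crictl pull fake.domain/registry.k8s.io/echoserver:1.4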
	I1209 11:31:07.918170  800461 logs.go:123] Gathering logs for dmesg ...
	I1209 11:31:07.918185  800461 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:31:07.942237  800461 logs.go:123] Gathering logs for kube-apiserver [8f5f2eca7e918147351695f83e907d55498a88bcb6c566f93410605398939265] ...
	I1209 11:31:07.942269  800461 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f5f2eca7e918147351695f83e907d55498a88bcb6c566f93410605398939265"
	I1209 11:31:08.010303  800461 logs.go:123] Gathering logs for etcd [2b8b97c8ef833f12b7f54ccb1f4f9e43ee5e96f62f8bb9ee22ef50e58e3ea81f] ...
	I1209 11:31:08.010402  800461 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b8b97c8ef833f12b7f54ccb1f4f9e43ee5e96f62f8bb9ee22ef50e58e3ea81f"
	I1209 11:31:08.069628  800461 logs.go:123] Gathering logs for kube-scheduler [ae41a05fd4b1117afac3aa1f7764574fd0ed6427f9d72dc4841a9f2d948eeec3] ...
	I1209 11:31:08.069669  800461 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ae41a05fd4b1117afac3aa1f7764574fd0ed6427f9d72dc4841a9f2d948eeec3"
	I1209 11:31:08.117522  800461 logs.go:123] Gathering logs for kube-proxy [167b84f8f987c313cf51f0bcb5bd6488b6ad7b450cd789460a00b513af86bb98] ...
	I1209 11:31:08.117559  800461 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 167b84f8f987c313cf51f0bcb5bd6488b6ad7b450cd789460a00b513af86bb98"
	I1209 11:31:08.160325  800461 logs.go:123] Gathering logs for storage-provisioner [663485e6313972a7cfcbedd678323ecb2f884257c37073d5b2e988fad2ae5465] ...
	I1209 11:31:08.160413  800461 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 663485e6313972a7cfcbedd678323ecb2f884257c37073d5b2e988fad2ae5465"
	I1209 11:31:08.241337  800461 logs.go:123] Gathering logs for etcd [c841cccf0a5bdd1fe2fca556e5e36b880a686f41d9146ef7788f247bce61a9d5] ...
	I1209 11:31:08.241368  800461 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c841cccf0a5bdd1fe2fca556e5e36b880a686f41d9146ef7788f247bce61a9d5"
	I1209 11:31:08.299507  800461 logs.go:123] Gathering logs for kube-proxy [a9ed84f8cfb61c9caff73d1ed13931a69b4186cabc325c3cc1809699aa1be74e] ...
	I1209 11:31:08.299537  800461 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a9ed84f8cfb61c9caff73d1ed13931a69b4186cabc325c3cc1809699aa1be74e"
	I1209 11:31:08.349466  800461 logs.go:123] Gathering logs for kube-controller-manager [8afdcb5ba074eac7202e301ff2b67765f72a8dd4e47a0100cf89910df022caaa] ...
	I1209 11:31:08.349574  800461 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8afdcb5ba074eac7202e301ff2b67765f72a8dd4e47a0100cf89910df022caaa"
	I1209 11:31:08.415926  800461 logs.go:123] Gathering logs for kindnet [91cb2ed43dfec514bb35f6e6633f1b7bc69390d1eb0c98d7fd830339fc56e9e3] ...
	I1209 11:31:08.415965  800461 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 91cb2ed43dfec514bb35f6e6633f1b7bc69390d1eb0c98d7fd830339fc56e9e3"
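Each "Gathering logs for ..." step above pairs a container ID from an earlier crictl listing with a bounded log fetch. The same two-step pattern can be reproduced by hand; a minimal sketch, assuming crictl is installed on the node as in this run:

	# 1. List all container IDs (running or exited) for one component
	ids=$(sudo crictl ps -a --quiet --name=kube-apiserver)
	# 2. Fetch the last 400 lines of each, mirroring the commands in the log
	for id in $ids; do sudo /usr/bin/crictl logs --tail 400 "$id"; done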
	I1209 11:31:08.486089  800461 out.go:358] Setting ErrFile to fd 2...
	I1209 11:31:08.486164  800461 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1209 11:31:08.486253  800461 out.go:270] X Problems detected in kubelet:
	W1209 11:31:08.486291  800461 out.go:270]   Dec 09 11:30:44 old-k8s-version-623695 kubelet[663]: E1209 11:30:44.548327     663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
	W1209 11:31:08.486335  800461 out.go:270]   Dec 09 11:30:50 old-k8s-version-623695 kubelet[663]: E1209 11:30:50.546557     663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1209 11:31:08.486367  800461 out.go:270]   Dec 09 11:30:56 old-k8s-version-623695 kubelet[663]: E1209 11:30:56.546828     663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
	W1209 11:31:08.486408  800461 out.go:270]   Dec 09 11:31:05 old-k8s-version-623695 kubelet[663]: E1209 11:31:05.546612     663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1209 11:31:08.486473  800461 out.go:270]   Dec 09 11:31:07 old-k8s-version-623695 kubelet[663]: E1209 11:31:07.547033     663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
	I1209 11:31:08.486487  800461 out.go:358] Setting ErrFile to fd 2...
	I1209 11:31:08.486493  800461 out.go:392] TERM=,COLORTERM=, which probably does not support color
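The "Problems detected in kubelet" summary above is a filtered view of the kubelet journal: minikube scans the gathered journal lines for known problem patterns (logs.go:138) and surfaces the most recent matches. A rough manual approximation, assuming journalctl access on the node (the grep pattern here is illustrative, not minikube's exact matcher):

	sudo journalctl -u kubelet -n 400 --no-pager \
	  | grep -E 'ImagePullBackOff|CrashLoopBackOff|ErrImagePull|Failed to watch'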
	I1209 11:31:18.488559  800461 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:31:18.503305  800461 api_server.go:72] duration metric: took 5m47.996568848s to wait for apiserver process to appear ...
	I1209 11:31:18.503330  800461 api_server.go:88] waiting for apiserver healthz status ...
	I1209 11:31:18.503367  800461 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:31:18.503422  800461 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:31:18.586637  800461 cri.go:89] found id: "92b50938c8b97d22b97b90d7211713e83c2e408b0a5bfa70a5b801a11893dfe2"
	I1209 11:31:18.586658  800461 cri.go:89] found id: "8f5f2eca7e918147351695f83e907d55498a88bcb6c566f93410605398939265"
	I1209 11:31:18.586663  800461 cri.go:89] found id: ""
	I1209 11:31:18.586670  800461 logs.go:282] 2 containers: [92b50938c8b97d22b97b90d7211713e83c2e408b0a5bfa70a5b801a11893dfe2 8f5f2eca7e918147351695f83e907d55498a88bcb6c566f93410605398939265]
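With an apiserver process confirmed via pgrep, the run now waits on the /healthz endpoint. An equivalent manual probe, reusing the in-guest kubectl binary and kubeconfig paths that this same log uses for "describe nodes" below:

	# Probe the apiserver health endpoint through the cluster's kubeconfig
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl get --raw /healthz \
	  --kubeconfig=/var/lib/minikube/kubeconfig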
	I1209 11:31:18.586732  800461 ssh_runner.go:195] Run: which crictl
	I1209 11:31:18.592662  800461 ssh_runner.go:195] Run: which crictl
	I1209 11:31:18.597005  800461 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 11:31:18.597082  800461 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:31:18.650624  800461 cri.go:89] found id: "c841cccf0a5bdd1fe2fca556e5e36b880a686f41d9146ef7788f247bce61a9d5"
	I1209 11:31:18.650643  800461 cri.go:89] found id: "2b8b97c8ef833f12b7f54ccb1f4f9e43ee5e96f62f8bb9ee22ef50e58e3ea81f"
	I1209 11:31:18.650648  800461 cri.go:89] found id: ""
	I1209 11:31:18.650655  800461 logs.go:282] 2 containers: [c841cccf0a5bdd1fe2fca556e5e36b880a686f41d9146ef7788f247bce61a9d5 2b8b97c8ef833f12b7f54ccb1f4f9e43ee5e96f62f8bb9ee22ef50e58e3ea81f]
	I1209 11:31:18.650714  800461 ssh_runner.go:195] Run: which crictl
	I1209 11:31:18.655082  800461 ssh_runner.go:195] Run: which crictl
	I1209 11:31:18.659058  800461 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 11:31:18.659127  800461 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:31:18.716242  800461 cri.go:89] found id: "af856b017afebc2eff0a52c30b40b9ae7d18fbed9646a2cd62795c30eae0a468"
	I1209 11:31:18.716262  800461 cri.go:89] found id: "ed42b2a1e21f5bc2262274dce47ab17b0351c70af13f20f07756dca1fc28f478"
	I1209 11:31:18.716267  800461 cri.go:89] found id: ""
	I1209 11:31:18.716275  800461 logs.go:282] 2 containers: [af856b017afebc2eff0a52c30b40b9ae7d18fbed9646a2cd62795c30eae0a468 ed42b2a1e21f5bc2262274dce47ab17b0351c70af13f20f07756dca1fc28f478]
	I1209 11:31:18.716332  800461 ssh_runner.go:195] Run: which crictl
	I1209 11:31:18.721120  800461 ssh_runner.go:195] Run: which crictl
	I1209 11:31:18.725267  800461 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:31:18.725399  800461 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:31:18.784506  800461 cri.go:89] found id: "ae41a05fd4b1117afac3aa1f7764574fd0ed6427f9d72dc4841a9f2d948eeec3"
	I1209 11:31:18.784578  800461 cri.go:89] found id: "0a9c5fc2481f8b0b984d6634dcbb9354b762189dacc2253907afe96468b89d47"
	I1209 11:31:18.784586  800461 cri.go:89] found id: ""
	I1209 11:31:18.784593  800461 logs.go:282] 2 containers: [ae41a05fd4b1117afac3aa1f7764574fd0ed6427f9d72dc4841a9f2d948eeec3 0a9c5fc2481f8b0b984d6634dcbb9354b762189dacc2253907afe96468b89d47]
	I1209 11:31:18.784683  800461 ssh_runner.go:195] Run: which crictl
	I1209 11:31:18.789471  800461 ssh_runner.go:195] Run: which crictl
	I1209 11:31:18.793630  800461 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:31:18.793751  800461 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:31:18.875516  800461 cri.go:89] found id: "167b84f8f987c313cf51f0bcb5bd6488b6ad7b450cd789460a00b513af86bb98"
	I1209 11:31:18.875610  800461 cri.go:89] found id: "a9ed84f8cfb61c9caff73d1ed13931a69b4186cabc325c3cc1809699aa1be74e"
	I1209 11:31:18.875642  800461 cri.go:89] found id: ""
	I1209 11:31:18.875671  800461 logs.go:282] 2 containers: [167b84f8f987c313cf51f0bcb5bd6488b6ad7b450cd789460a00b513af86bb98 a9ed84f8cfb61c9caff73d1ed13931a69b4186cabc325c3cc1809699aa1be74e]
	I1209 11:31:18.875795  800461 ssh_runner.go:195] Run: which crictl
	I1209 11:31:18.882901  800461 ssh_runner.go:195] Run: which crictl
	I1209 11:31:18.891490  800461 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:31:18.891681  800461 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:31:19.133994  800461 cri.go:89] found id: "8afdcb5ba074eac7202e301ff2b67765f72a8dd4e47a0100cf89910df022caaa"
	I1209 11:31:19.134060  800461 cri.go:89] found id: "25fc7bce15ad22f8de3baaeb0bb7f687bb2b3dc94e08ab8eb20977713310736b"
	I1209 11:31:19.134086  800461 cri.go:89] found id: ""
	I1209 11:31:19.134106  800461 logs.go:282] 2 containers: [8afdcb5ba074eac7202e301ff2b67765f72a8dd4e47a0100cf89910df022caaa 25fc7bce15ad22f8de3baaeb0bb7f687bb2b3dc94e08ab8eb20977713310736b]
	I1209 11:31:19.134198  800461 ssh_runner.go:195] Run: which crictl
	I1209 11:31:19.139026  800461 ssh_runner.go:195] Run: which crictl
	I1209 11:31:19.143699  800461 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 11:31:19.143825  800461 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:31:19.193447  800461 cri.go:89] found id: "91cb2ed43dfec514bb35f6e6633f1b7bc69390d1eb0c98d7fd830339fc56e9e3"
	I1209 11:31:19.193537  800461 cri.go:89] found id: "eb174547e077389a60898a71aa5ab17ac0877a9aa4886acc78d7d9454ff6ec44"
	I1209 11:31:19.193560  800461 cri.go:89] found id: ""
	I1209 11:31:19.193579  800461 logs.go:282] 2 containers: [91cb2ed43dfec514bb35f6e6633f1b7bc69390d1eb0c98d7fd830339fc56e9e3 eb174547e077389a60898a71aa5ab17ac0877a9aa4886acc78d7d9454ff6ec44]
	I1209 11:31:19.193678  800461 ssh_runner.go:195] Run: which crictl
	I1209 11:31:19.198061  800461 ssh_runner.go:195] Run: which crictl
	I1209 11:31:19.202246  800461 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:31:19.202370  800461 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:31:19.262303  800461 cri.go:89] found id: "b5b12d4047e8983ca175dce7ad530195e42f6c97736f3423b56ba4b67eee4cfd"
	I1209 11:31:19.262375  800461 cri.go:89] found id: ""
	I1209 11:31:19.262400  800461 logs.go:282] 1 containers: [b5b12d4047e8983ca175dce7ad530195e42f6c97736f3423b56ba4b67eee4cfd]
	I1209 11:31:19.262483  800461 ssh_runner.go:195] Run: which crictl
	I1209 11:31:19.266675  800461 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1209 11:31:19.266798  800461 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 11:31:19.321269  800461 cri.go:89] found id: "663485e6313972a7cfcbedd678323ecb2f884257c37073d5b2e988fad2ae5465"
	I1209 11:31:19.321342  800461 cri.go:89] found id: "1c864e51ed369180554c763558a548ef00ea435a92cc98a093c1261e29cf2bbf"
	I1209 11:31:19.321361  800461 cri.go:89] found id: ""
	I1209 11:31:19.321380  800461 logs.go:282] 2 containers: [663485e6313972a7cfcbedd678323ecb2f884257c37073d5b2e988fad2ae5465 1c864e51ed369180554c763558a548ef00ea435a92cc98a093c1261e29cf2bbf]
	I1209 11:31:19.321461  800461 ssh_runner.go:195] Run: which crictl
	I1209 11:31:19.326755  800461 ssh_runner.go:195] Run: which crictl
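The lines above enumerate containers component by component with identical crictl invocations; the same inventory can be collected in a single pass. A sketch using the component names from this run:

	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	    kube-controller-manager kindnet kubernetes-dashboard storage-provisioner; do
	  echo "== $name =="
	  sudo crictl ps -a --quiet --name="$name"
	done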
	I1209 11:31:19.331149  800461 logs.go:123] Gathering logs for kube-scheduler [0a9c5fc2481f8b0b984d6634dcbb9354b762189dacc2253907afe96468b89d47] ...
	I1209 11:31:19.331228  800461 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0a9c5fc2481f8b0b984d6634dcbb9354b762189dacc2253907afe96468b89d47"
	I1209 11:31:19.386882  800461 logs.go:123] Gathering logs for kindnet [91cb2ed43dfec514bb35f6e6633f1b7bc69390d1eb0c98d7fd830339fc56e9e3] ...
	I1209 11:31:19.386967  800461 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 91cb2ed43dfec514bb35f6e6633f1b7bc69390d1eb0c98d7fd830339fc56e9e3"
	I1209 11:31:19.452242  800461 logs.go:123] Gathering logs for containerd ...
	I1209 11:31:19.452325  800461 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 11:31:19.525862  800461 logs.go:123] Gathering logs for dmesg ...
	I1209 11:31:19.525954  800461 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:31:19.545007  800461 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:31:19.545090  800461 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 11:31:19.724757  800461 logs.go:123] Gathering logs for kube-apiserver [92b50938c8b97d22b97b90d7211713e83c2e408b0a5bfa70a5b801a11893dfe2] ...
	I1209 11:31:19.724789  800461 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92b50938c8b97d22b97b90d7211713e83c2e408b0a5bfa70a5b801a11893dfe2"
	I1209 11:31:19.826367  800461 logs.go:123] Gathering logs for etcd [2b8b97c8ef833f12b7f54ccb1f4f9e43ee5e96f62f8bb9ee22ef50e58e3ea81f] ...
	I1209 11:31:19.826408  800461 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b8b97c8ef833f12b7f54ccb1f4f9e43ee5e96f62f8bb9ee22ef50e58e3ea81f"
	I1209 11:31:19.908141  800461 logs.go:123] Gathering logs for coredns [af856b017afebc2eff0a52c30b40b9ae7d18fbed9646a2cd62795c30eae0a468] ...
	I1209 11:31:19.908227  800461 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 af856b017afebc2eff0a52c30b40b9ae7d18fbed9646a2cd62795c30eae0a468"
	I1209 11:31:19.988919  800461 logs.go:123] Gathering logs for coredns [ed42b2a1e21f5bc2262274dce47ab17b0351c70af13f20f07756dca1fc28f478] ...
	I1209 11:31:19.988947  800461 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ed42b2a1e21f5bc2262274dce47ab17b0351c70af13f20f07756dca1fc28f478"
	I1209 11:31:20.059667  800461 logs.go:123] Gathering logs for kube-proxy [a9ed84f8cfb61c9caff73d1ed13931a69b4186cabc325c3cc1809699aa1be74e] ...
	I1209 11:31:20.059707  800461 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a9ed84f8cfb61c9caff73d1ed13931a69b4186cabc325c3cc1809699aa1be74e"
	I1209 11:31:20.122817  800461 logs.go:123] Gathering logs for kindnet [eb174547e077389a60898a71aa5ab17ac0877a9aa4886acc78d7d9454ff6ec44] ...
	I1209 11:31:20.122865  800461 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb174547e077389a60898a71aa5ab17ac0877a9aa4886acc78d7d9454ff6ec44"
	I1209 11:31:20.176945  800461 logs.go:123] Gathering logs for kube-apiserver [8f5f2eca7e918147351695f83e907d55498a88bcb6c566f93410605398939265] ...
	I1209 11:31:20.176975  800461 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f5f2eca7e918147351695f83e907d55498a88bcb6c566f93410605398939265"
	I1209 11:31:20.248500  800461 logs.go:123] Gathering logs for kube-scheduler [ae41a05fd4b1117afac3aa1f7764574fd0ed6427f9d72dc4841a9f2d948eeec3] ...
	I1209 11:31:20.248577  800461 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ae41a05fd4b1117afac3aa1f7764574fd0ed6427f9d72dc4841a9f2d948eeec3"
	I1209 11:31:20.296022  800461 logs.go:123] Gathering logs for kube-controller-manager [25fc7bce15ad22f8de3baaeb0bb7f687bb2b3dc94e08ab8eb20977713310736b] ...
	I1209 11:31:20.296050  800461 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 25fc7bce15ad22f8de3baaeb0bb7f687bb2b3dc94e08ab8eb20977713310736b"
	I1209 11:31:20.388485  800461 logs.go:123] Gathering logs for kubernetes-dashboard [b5b12d4047e8983ca175dce7ad530195e42f6c97736f3423b56ba4b67eee4cfd] ...
	I1209 11:31:20.388571  800461 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b5b12d4047e8983ca175dce7ad530195e42f6c97736f3423b56ba4b67eee4cfd"
	I1209 11:31:20.438386  800461 logs.go:123] Gathering logs for storage-provisioner [1c864e51ed369180554c763558a548ef00ea435a92cc98a093c1261e29cf2bbf] ...
	I1209 11:31:20.438415  800461 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c864e51ed369180554c763558a548ef00ea435a92cc98a093c1261e29cf2bbf"
	I1209 11:31:20.479553  800461 logs.go:123] Gathering logs for container status ...
	I1209 11:31:20.479584  800461 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:31:20.547014  800461 logs.go:123] Gathering logs for kubelet ...
	I1209 11:31:20.547045  800461 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1209 11:31:20.602779  800461 logs.go:138] Found kubelet problem: Dec 09 11:25:49 old-k8s-version-623695 kubelet[663]: E1209 11:25:49.528000     663 reflector.go:138] object-"kube-system"/"coredns-token-b78rj": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-b78rj" is forbidden: User "system:node:old-k8s-version-623695" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-623695' and this object
	W1209 11:31:20.603031  800461 logs.go:138] Found kubelet problem: Dec 09 11:25:49 old-k8s-version-623695 kubelet[663]: E1209 11:25:49.528077     663 reflector.go:138] object-"kube-system"/"kindnet-token-nl827": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-nl827" is forbidden: User "system:node:old-k8s-version-623695" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-623695' and this object
	W1209 11:31:20.603261  800461 logs.go:138] Found kubelet problem: Dec 09 11:25:49 old-k8s-version-623695 kubelet[663]: E1209 11:25:49.532699     663 reflector.go:138] object-"kube-system"/"storage-provisioner-token-sw5w9": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-sw5w9" is forbidden: User "system:node:old-k8s-version-623695" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-623695' and this object
	W1209 11:31:20.603459  800461 logs.go:138] Found kubelet problem: Dec 09 11:25:49 old-k8s-version-623695 kubelet[663]: E1209 11:25:49.532801     663 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-623695" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-623695' and this object
	W1209 11:31:20.603665  800461 logs.go:138] Found kubelet problem: Dec 09 11:25:49 old-k8s-version-623695 kubelet[663]: E1209 11:25:49.532864     663 reflector.go:138] object-"default"/"default-token-pgtqr": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-pgtqr" is forbidden: User "system:node:old-k8s-version-623695" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-623695' and this object
	W1209 11:31:20.603875  800461 logs.go:138] Found kubelet problem: Dec 09 11:25:49 old-k8s-version-623695 kubelet[663]: E1209 11:25:49.532917     663 reflector.go:138] object-"kube-system"/"kube-proxy-token-tnwqj": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-tnwqj" is forbidden: User "system:node:old-k8s-version-623695" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-623695' and this object
	W1209 11:31:20.604167  800461 logs.go:138] Found kubelet problem: Dec 09 11:25:49 old-k8s-version-623695 kubelet[663]: E1209 11:25:49.532965     663 reflector.go:138] object-"kube-system"/"metrics-server-token-hcpl8": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-hcpl8" is forbidden: User "system:node:old-k8s-version-623695" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-623695' and this object
	W1209 11:31:20.604377  800461 logs.go:138] Found kubelet problem: Dec 09 11:25:49 old-k8s-version-623695 kubelet[663]: E1209 11:25:49.533017     663 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-623695" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-623695' and this object
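The eight reflector.go failures at 11:25:49 above are secret and configmap watches rejected by the node authorizer ("no relationship found between node ... and this object"). This is commonly seen for a short window after a container restart, before the kubelet's pod assignments are re-synced with the API server; the later entries in this scan are image-pull and crash backoffs rather than authorization failures. A follow-up check that the pods were eventually bound to the node (hypothetical; not part of the captured run):

	sudo /var/lib/minikube/binaries/v1.20.0/kubectl get pods -n kube-system -o wide \
	  --kubeconfig=/var/lib/minikube/kubeconfig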
	W1209 11:31:20.614712  800461 logs.go:138] Found kubelet problem: Dec 09 11:25:51 old-k8s-version-623695 kubelet[663]: E1209 11:25:51.720038     663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W1209 11:31:20.614911  800461 logs.go:138] Found kubelet problem: Dec 09 11:25:51 old-k8s-version-623695 kubelet[663]: E1209 11:25:51.747865     663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1209 11:31:20.617926  800461 logs.go:138] Found kubelet problem: Dec 09 11:26:03 old-k8s-version-623695 kubelet[663]: E1209 11:26:03.558736     663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W1209 11:31:20.620029  800461 logs.go:138] Found kubelet problem: Dec 09 11:26:14 old-k8s-version-623695 kubelet[663]: E1209 11:26:14.883179     663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
	W1209 11:31:20.620222  800461 logs.go:138] Found kubelet problem: Dec 09 11:26:15 old-k8s-version-623695 kubelet[663]: E1209 11:26:15.549936     663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1209 11:31:20.620552  800461 logs.go:138] Found kubelet problem: Dec 09 11:26:15 old-k8s-version-623695 kubelet[663]: E1209 11:26:15.890882     663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
	W1209 11:31:20.621216  800461 logs.go:138] Found kubelet problem: Dec 09 11:26:19 old-k8s-version-623695 kubelet[663]: E1209 11:26:19.928539     663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
	W1209 11:31:20.621656  800461 logs.go:138] Found kubelet problem: Dec 09 11:26:21 old-k8s-version-623695 kubelet[663]: E1209 11:26:21.930997     663 pod_workers.go:191] Error syncing pod a4b9e510-c334-4949-a8ad-1f3f41854e03 ("storage-provisioner_kube-system(a4b9e510-c334-4949-a8ad-1f3f41854e03)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(a4b9e510-c334-4949-a8ad-1f3f41854e03)"
	W1209 11:31:20.624090  800461 logs.go:138] Found kubelet problem: Dec 09 11:26:28 old-k8s-version-623695 kubelet[663]: E1209 11:26:28.556809     663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W1209 11:31:20.625177  800461 logs.go:138] Found kubelet problem: Dec 09 11:26:33 old-k8s-version-623695 kubelet[663]: E1209 11:26:33.968067     663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
	W1209 11:31:20.625506  800461 logs.go:138] Found kubelet problem: Dec 09 11:26:39 old-k8s-version-623695 kubelet[663]: E1209 11:26:39.927886     663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
	W1209 11:31:20.625700  800461 logs.go:138] Found kubelet problem: Dec 09 11:26:42 old-k8s-version-623695 kubelet[663]: E1209 11:26:42.551170     663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1209 11:31:20.626029  800461 logs.go:138] Found kubelet problem: Dec 09 11:26:52 old-k8s-version-623695 kubelet[663]: E1209 11:26:52.546429     663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
	W1209 11:31:20.626212  800461 logs.go:138] Found kubelet problem: Dec 09 11:26:53 old-k8s-version-623695 kubelet[663]: E1209 11:26:53.546792     663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1209 11:31:20.626396  800461 logs.go:138] Found kubelet problem: Dec 09 11:27:04 old-k8s-version-623695 kubelet[663]: E1209 11:27:04.547503     663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1209 11:31:20.626985  800461 logs.go:138] Found kubelet problem: Dec 09 11:27:06 old-k8s-version-623695 kubelet[663]: E1209 11:27:06.138491     663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
	W1209 11:31:20.627309  800461 logs.go:138] Found kubelet problem: Dec 09 11:27:09 old-k8s-version-623695 kubelet[663]: E1209 11:27:09.927608     663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
	W1209 11:31:20.629742  800461 logs.go:138] Found kubelet problem: Dec 09 11:27:15 old-k8s-version-623695 kubelet[663]: E1209 11:27:15.558173     663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W1209 11:31:20.630068  800461 logs.go:138] Found kubelet problem: Dec 09 11:27:22 old-k8s-version-623695 kubelet[663]: E1209 11:27:22.550284     663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
	W1209 11:31:20.630252  800461 logs.go:138] Found kubelet problem: Dec 09 11:27:29 old-k8s-version-623695 kubelet[663]: E1209 11:27:29.559056     663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1209 11:31:20.630574  800461 logs.go:138] Found kubelet problem: Dec 09 11:27:35 old-k8s-version-623695 kubelet[663]: E1209 11:27:35.546799     663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
	W1209 11:31:20.630756  800461 logs.go:138] Found kubelet problem: Dec 09 11:27:43 old-k8s-version-623695 kubelet[663]: E1209 11:27:43.546652     663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1209 11:31:20.631337  800461 logs.go:138] Found kubelet problem: Dec 09 11:27:47 old-k8s-version-623695 kubelet[663]: E1209 11:27:47.281093     663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
	W1209 11:31:20.631667  800461 logs.go:138] Found kubelet problem: Dec 09 11:27:49 old-k8s-version-623695 kubelet[663]: E1209 11:27:49.927704     663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
	W1209 11:31:20.631851  800461 logs.go:138] Found kubelet problem: Dec 09 11:27:55 old-k8s-version-623695 kubelet[663]: E1209 11:27:55.546651     663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1209 11:31:20.632178  800461 logs.go:138] Found kubelet problem: Dec 09 11:28:03 old-k8s-version-623695 kubelet[663]: E1209 11:28:03.546208     663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
	W1209 11:31:20.632359  800461 logs.go:138] Found kubelet problem: Dec 09 11:28:10 old-k8s-version-623695 kubelet[663]: E1209 11:28:10.546626     663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1209 11:31:20.632692  800461 logs.go:138] Found kubelet problem: Dec 09 11:28:15 old-k8s-version-623695 kubelet[663]: E1209 11:28:15.546835     663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
	W1209 11:31:20.632876  800461 logs.go:138] Found kubelet problem: Dec 09 11:28:23 old-k8s-version-623695 kubelet[663]: E1209 11:28:23.546586     663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1209 11:31:20.633206  800461 logs.go:138] Found kubelet problem: Dec 09 11:28:26 old-k8s-version-623695 kubelet[663]: E1209 11:28:26.546890     663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
	W1209 11:31:20.635619  800461 logs.go:138] Found kubelet problem: Dec 09 11:28:38 old-k8s-version-623695 kubelet[663]: E1209 11:28:38.563060     663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W1209 11:31:20.635943  800461 logs.go:138] Found kubelet problem: Dec 09 11:28:40 old-k8s-version-623695 kubelet[663]: E1209 11:28:40.546787     663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
	W1209 11:31:20.636130  800461 logs.go:138] Found kubelet problem: Dec 09 11:28:50 old-k8s-version-623695 kubelet[663]: E1209 11:28:50.546870     663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1209 11:31:20.636454  800461 logs.go:138] Found kubelet problem: Dec 09 11:28:55 old-k8s-version-623695 kubelet[663]: E1209 11:28:55.546234     663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
	W1209 11:31:20.636636  800461 logs.go:138] Found kubelet problem: Dec 09 11:29:03 old-k8s-version-623695 kubelet[663]: E1209 11:29:03.546823     663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1209 11:31:20.637224  800461 logs.go:138] Found kubelet problem: Dec 09 11:29:10 old-k8s-version-623695 kubelet[663]: E1209 11:29:10.508888     663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
	W1209 11:31:20.637408  800461 logs.go:138] Found kubelet problem: Dec 09 11:29:17 old-k8s-version-623695 kubelet[663]: E1209 11:29:17.546618     663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1209 11:31:20.637739  800461 logs.go:138] Found kubelet problem: Dec 09 11:29:19 old-k8s-version-623695 kubelet[663]: E1209 11:29:19.928082     663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
	W1209 11:31:20.637921  800461 logs.go:138] Found kubelet problem: Dec 09 11:29:28 old-k8s-version-623695 kubelet[663]: E1209 11:29:28.547234     663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1209 11:31:20.638245  800461 logs.go:138] Found kubelet problem: Dec 09 11:29:31 old-k8s-version-623695 kubelet[663]: E1209 11:29:31.546227     663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
	W1209 11:31:20.638429  800461 logs.go:138] Found kubelet problem: Dec 09 11:29:41 old-k8s-version-623695 kubelet[663]: E1209 11:29:41.546721     663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1209 11:31:20.638756  800461 logs.go:138] Found kubelet problem: Dec 09 11:29:43 old-k8s-version-623695 kubelet[663]: E1209 11:29:43.546444     663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
	W1209 11:31:20.639078  800461 logs.go:138] Found kubelet problem: Dec 09 11:29:56 old-k8s-version-623695 kubelet[663]: E1209 11:29:56.547387     663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
	W1209 11:31:20.639264  800461 logs.go:138] Found kubelet problem: Dec 09 11:29:56 old-k8s-version-623695 kubelet[663]: E1209 11:29:56.548186     663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1209 11:31:20.639608  800461 logs.go:138] Found kubelet problem: Dec 09 11:30:09 old-k8s-version-623695 kubelet[663]: E1209 11:30:09.546239     663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
	W1209 11:31:20.639791  800461 logs.go:138] Found kubelet problem: Dec 09 11:30:11 old-k8s-version-623695 kubelet[663]: E1209 11:30:11.546522     663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1209 11:31:20.640114  800461 logs.go:138] Found kubelet problem: Dec 09 11:30:20 old-k8s-version-623695 kubelet[663]: E1209 11:30:20.547174     663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
	W1209 11:31:20.640296  800461 logs.go:138] Found kubelet problem: Dec 09 11:30:26 old-k8s-version-623695 kubelet[663]: E1209 11:30:26.547093     663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1209 11:31:20.640619  800461 logs.go:138] Found kubelet problem: Dec 09 11:30:33 old-k8s-version-623695 kubelet[663]: E1209 11:30:33.546231     663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
	W1209 11:31:20.640802  800461 logs.go:138] Found kubelet problem: Dec 09 11:30:39 old-k8s-version-623695 kubelet[663]: E1209 11:30:39.546660     663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1209 11:31:20.641125  800461 logs.go:138] Found kubelet problem: Dec 09 11:30:44 old-k8s-version-623695 kubelet[663]: E1209 11:30:44.548327     663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
	W1209 11:31:20.641326  800461 logs.go:138] Found kubelet problem: Dec 09 11:30:50 old-k8s-version-623695 kubelet[663]: E1209 11:30:50.546557     663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1209 11:31:20.641653  800461 logs.go:138] Found kubelet problem: Dec 09 11:30:56 old-k8s-version-623695 kubelet[663]: E1209 11:30:56.546828     663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
	W1209 11:31:20.641835  800461 logs.go:138] Found kubelet problem: Dec 09 11:31:05 old-k8s-version-623695 kubelet[663]: E1209 11:31:05.546612     663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1209 11:31:20.642158  800461 logs.go:138] Found kubelet problem: Dec 09 11:31:07 old-k8s-version-623695 kubelet[663]: E1209 11:31:07.547033     663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
	W1209 11:31:20.642482  800461 logs.go:138] Found kubelet problem: Dec 09 11:31:18 old-k8s-version-623695 kubelet[663]: E1209 11:31:18.546764     663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
	W1209 11:31:20.644887  800461 logs.go:138] Found kubelet problem: Dec 09 11:31:18 old-k8s-version-623695 kubelet[663]: E1209 11:31:18.588176     663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	I1209 11:31:20.644899  800461 logs.go:123] Gathering logs for etcd [c841cccf0a5bdd1fe2fca556e5e36b880a686f41d9146ef7788f247bce61a9d5] ...
	I1209 11:31:20.644916  800461 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c841cccf0a5bdd1fe2fca556e5e36b880a686f41d9146ef7788f247bce61a9d5"
	I1209 11:31:20.702944  800461 logs.go:123] Gathering logs for kube-proxy [167b84f8f987c313cf51f0bcb5bd6488b6ad7b450cd789460a00b513af86bb98] ...
	I1209 11:31:20.702973  800461 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 167b84f8f987c313cf51f0bcb5bd6488b6ad7b450cd789460a00b513af86bb98"
	I1209 11:31:20.752172  800461 logs.go:123] Gathering logs for kube-controller-manager [8afdcb5ba074eac7202e301ff2b67765f72a8dd4e47a0100cf89910df022caaa] ...
	I1209 11:31:20.752205  800461 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8afdcb5ba074eac7202e301ff2b67765f72a8dd4e47a0100cf89910df022caaa"
	I1209 11:31:20.826676  800461 logs.go:123] Gathering logs for storage-provisioner [663485e6313972a7cfcbedd678323ecb2f884257c37073d5b2e988fad2ae5465] ...
	I1209 11:31:20.826715  800461 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 663485e6313972a7cfcbedd678323ecb2f884257c37073d5b2e988fad2ae5465"
	I1209 11:31:20.876353  800461 out.go:358] Setting ErrFile to fd 2...
	I1209 11:31:20.876379  800461 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1209 11:31:20.876431  800461 out.go:270] X Problems detected in kubelet:
	W1209 11:31:20.876458  800461 out.go:270]   Dec 09 11:30:56 old-k8s-version-623695 kubelet[663]: E1209 11:30:56.546828     663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
	W1209 11:31:20.876476  800461 out.go:270]   Dec 09 11:31:05 old-k8s-version-623695 kubelet[663]: E1209 11:31:05.546612     663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1209 11:31:20.876492  800461 out.go:270]   Dec 09 11:31:07 old-k8s-version-623695 kubelet[663]: E1209 11:31:07.547033     663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
	W1209 11:31:20.876499  800461 out.go:270]   Dec 09 11:31:18 old-k8s-version-623695 kubelet[663]: E1209 11:31:18.546764     663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
	W1209 11:31:20.876505  800461 out.go:270]   Dec 09 11:31:18 old-k8s-version-623695 kubelet[663]: E1209 11:31:18.588176     663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	I1209 11:31:20.876532  800461 out.go:358] Setting ErrFile to fd 2...
	I1209 11:31:20.876539  800461 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 11:31:30.876747  800461 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1209 11:31:30.888585  800461 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1209 11:31:30.891227  800461 out.go:201] 
	W1209 11:31:30.893313  800461 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W1209 11:31:30.893353  800461 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	W1209 11:31:30.893369  800461 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	W1209 11:31:30.893375  800461 out.go:270] * 
	W1209 11:31:30.894594  800461 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 11:31:30.897267  800461 out.go:201] 

                                                
                                                
** /stderr **
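
Every "Found kubelet problem" entry in the transcript above reduces to two recurring conditions: dashboard-metrics-scraper is in CrashLoopBackOff, and metrics-server cannot pull fake.domain/registry.k8s.io/echoserver:1.4 because the addon was deliberately pointed at the unreachable registry fake.domain (see the Audit table further down). The pull never reaches HTTP; it dies at DNS resolution against 192.168.85.1:53. A minimal Go sketch, not part of the test suite, reproduces the same resolver error the kubelet reports:

    package main

    import (
    	"fmt"
    	"net"
    )

    func main() {
    	// The kubelet's image pull fails before any registry traffic happens:
    	// the host simply does not resolve. This lookup returns the same
    	// "no such host" error seen in the ErrImagePull lines above.
    	addrs, err := net.LookupHost("fake.domain")
    	if err != nil {
    		fmt.Println("lookup failed:", err) // e.g. "lookup fake.domain: no such host"
    		return
    	}
    	fmt.Println("resolved to:", addrs)
    }
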
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p old-k8s-version-623695 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0": exit status 102
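
Note the failure mode: the apiserver answers /healthz with 200 just before exit, so the K8S_UNHEALTHY_CONTROL_PLANE verdict comes from the version check, not liveness; the control plane never reported v1.20.0 within the 6m0s wait. For reference, a rough Go sketch of such an ad-hoc healthz probe, assuming the endpoint from the transcript and skipping TLS verification (a real client would use the CA from the kubeconfig; this is not minikube's actual code):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    )

    func main() {
    	// Ad-hoc probe against the minikube apiserver. The cluster cert is
    	// not in the local trust store, so verification is skipped here.
    	client := &http.Client{Transport: &http.Transport{
    		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    	}}
    	resp, err := client.Get("https://192.168.85.2:8443/healthz")
    	if err != nil {
    		fmt.Println("probe failed:", err)
    		return
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	fmt.Println(resp.StatusCode, string(body)) // expect: 200 ok
    }
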
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-623695
helpers_test.go:235: (dbg) docker inspect old-k8s-version-623695:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e35e17296ac8fd4561f5e7477248d51f221402d3e58f2cf3e27d2d85941d98c5",
	        "Created": "2024-12-09T11:22:11.587866445Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 800659,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-12-09T11:25:22.633046099Z",
	            "FinishedAt": "2024-12-09T11:25:21.459511628Z"
	        },
	        "Image": "sha256:51526bd7c0894c18bc1ef50650a0aaaea3bed24f70f72f77ac668ae72dfff137",
	        "ResolvConfPath": "/var/lib/docker/containers/e35e17296ac8fd4561f5e7477248d51f221402d3e58f2cf3e27d2d85941d98c5/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e35e17296ac8fd4561f5e7477248d51f221402d3e58f2cf3e27d2d85941d98c5/hostname",
	        "HostsPath": "/var/lib/docker/containers/e35e17296ac8fd4561f5e7477248d51f221402d3e58f2cf3e27d2d85941d98c5/hosts",
	        "LogPath": "/var/lib/docker/containers/e35e17296ac8fd4561f5e7477248d51f221402d3e58f2cf3e27d2d85941d98c5/e35e17296ac8fd4561f5e7477248d51f221402d3e58f2cf3e27d2d85941d98c5-json.log",
	        "Name": "/old-k8s-version-623695",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-623695:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-623695",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/5bca33ee280dba065127f98b38db67b119e9597628188ae34adc7a04adbbf9c1-init/diff:/var/lib/docker/overlay2/3061263481abb42050cdf79a3c56b934922c719b93d67b858ded630617e658c8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5bca33ee280dba065127f98b38db67b119e9597628188ae34adc7a04adbbf9c1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5bca33ee280dba065127f98b38db67b119e9597628188ae34adc7a04adbbf9c1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5bca33ee280dba065127f98b38db67b119e9597628188ae34adc7a04adbbf9c1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-623695",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-623695/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-623695",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-623695",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-623695",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7286a6b444dce53d5080d1c9ed89ae73a5e1e30dec021a0d11def1fb422c9b19",
	            "SandboxKey": "/var/run/docker/netns/7286a6b444dc",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33802"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33803"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33806"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33804"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33805"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-623695": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:55:02",
	                    "DriverOpts": null,
	                    "NetworkID": "f91d291d02499a47c4dfd84c18f2598dde0c3e4a5e25fd0978ece0d31c6395da",
	                    "EndpointID": "2d7ff9836cad68f7a978931db7c76df06d2519df4b92b6bfc0214b7e27909fa8",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-623695",
	                        "e35e17296ac8"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
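
The inspect output shows a clean restart rather than a crashed container: ExitCode 0, StartedAt later than FinishedAt, and the apiserver port 8443/tcp published at 127.0.0.1:33805. When only a handful of these fields matter, decoding the JSON directly is less error-prone than scanning the dump; a small Go sketch assuming docker is on PATH (the struct fields mirror the dump above):

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"log"
    	"os/exec"
    )

    // Minimal view of `docker inspect` output: just the state and the
    // published ports, matching the structure of the dump above.
    type inspect struct {
    	State struct {
    		Status    string
    		StartedAt string
    	}
    	NetworkSettings struct {
    		Ports map[string][]struct{ HostIp, HostPort string }
    	}
    }

    func main() {
    	out, err := exec.Command("docker", "inspect", "old-k8s-version-623695").Output()
    	if err != nil {
    		log.Fatal(err)
    	}
    	var containers []inspect // docker inspect always emits a JSON array
    	if err := json.Unmarshal(out, &containers); err != nil {
    		log.Fatal(err)
    	}
    	if len(containers) == 0 {
    		log.Fatal("no such container")
    	}
    	c := containers[0]
    	fmt.Println(c.State.Status, "since", c.State.StartedAt)
    	for _, b := range c.NetworkSettings.Ports["8443/tcp"] {
    		fmt.Printf("apiserver published at %s:%s\n", b.HostIp, b.HostPort)
    	}
    }
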
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-623695 -n old-k8s-version-623695
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-623695 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-623695 logs -n 25: (2.729826447s)
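
This `minikube logs` collection, like the "Gathering logs for ..." steps in the transcript above, bottoms out in the same node-side command: sudo /usr/bin/crictl logs --tail <n> <container-id>, one invocation per runtime container, executed over SSH. A sketch of that pattern, assuming crictl is installed and sudo is available (hypothetical helper, not minikube's ssh_runner):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strconv"
    )

    // tailContainerLogs mirrors the command the test harness issues:
    // sudo /usr/bin/crictl logs --tail <n> <id>
    func tailContainerLogs(id string, n int) (string, error) {
    	out, err := exec.Command("sudo", "/usr/bin/crictl",
    		"logs", "--tail", strconv.Itoa(n), id).CombinedOutput()
    	return string(out), err
    }

    func main() {
    	// e.g. the etcd container ID from the transcript above
    	logs, err := tailContainerLogs("c841cccf0a5bdd1fe2fca556e5e36b880a686f41d9146ef7788f247bce61a9d5", 400)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, "crictl failed:", err)
    		os.Exit(1)
    	}
    	fmt.Print(logs)
    }
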
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |         Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	| start   | -p cert-expiration-528742                              | cert-expiration-528742   | jenkins | v1.34.0 | 09 Dec 24 11:20 UTC | 09 Dec 24 11:21 UTC |
	|         | --memory=2048                                          |                          |         |         |                     |                     |
	|         | --cert-expiration=3m                                   |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	| ssh     | force-systemd-env-377461                               | force-systemd-env-377461 | jenkins | v1.34.0 | 09 Dec 24 11:21 UTC | 09 Dec 24 11:21 UTC |
	|         | ssh cat                                                |                          |         |         |                     |                     |
	|         | /etc/containerd/config.toml                            |                          |         |         |                     |                     |
	| delete  | -p force-systemd-env-377461                            | force-systemd-env-377461 | jenkins | v1.34.0 | 09 Dec 24 11:21 UTC | 09 Dec 24 11:21 UTC |
	| start   | -p cert-options-724611                                 | cert-options-724611      | jenkins | v1.34.0 | 09 Dec 24 11:21 UTC | 09 Dec 24 11:22 UTC |
	|         | --memory=2048                                          |                          |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1                              |                          |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15                          |                          |         |         |                     |                     |
	|         | --apiserver-names=localhost                            |                          |         |         |                     |                     |
	|         | --apiserver-names=www.google.com                       |                          |         |         |                     |                     |
	|         | --apiserver-port=8555                                  |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	| ssh     | cert-options-724611 ssh                                | cert-options-724611      | jenkins | v1.34.0 | 09 Dec 24 11:22 UTC | 09 Dec 24 11:22 UTC |
	|         | openssl x509 -text -noout -in                          |                          |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                          |         |         |                     |                     |
	| ssh     | -p cert-options-724611 -- sudo                         | cert-options-724611      | jenkins | v1.34.0 | 09 Dec 24 11:22 UTC | 09 Dec 24 11:22 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                          |         |         |                     |                     |
	| delete  | -p cert-options-724611                                 | cert-options-724611      | jenkins | v1.34.0 | 09 Dec 24 11:22 UTC | 09 Dec 24 11:22 UTC |
	| start   | -p old-k8s-version-623695                              | old-k8s-version-623695   | jenkins | v1.34.0 | 09 Dec 24 11:22 UTC | 09 Dec 24 11:24 UTC |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                          |         |         |                     |                     |
	|         | --kvm-network=default                                  |                          |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                          |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                          |         |         |                     |                     |
	|         | --keep-context=false                                   |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                          |         |         |                     |                     |
	| start   | -p cert-expiration-528742                              | cert-expiration-528742   | jenkins | v1.34.0 | 09 Dec 24 11:24 UTC | 09 Dec 24 11:24 UTC |
	|         | --memory=2048                                          |                          |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	| delete  | -p cert-expiration-528742                              | cert-expiration-528742   | jenkins | v1.34.0 | 09 Dec 24 11:24 UTC | 09 Dec 24 11:24 UTC |
	| start   | -p no-preload-239649                                   | no-preload-239649        | jenkins | v1.34.0 | 09 Dec 24 11:24 UTC | 09 Dec 24 11:26 UTC |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr                                      |                          |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                          |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-623695        | old-k8s-version-623695   | jenkins | v1.34.0 | 09 Dec 24 11:25 UTC | 09 Dec 24 11:25 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                          |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                          |         |         |                     |                     |
	| stop    | -p old-k8s-version-623695                              | old-k8s-version-623695   | jenkins | v1.34.0 | 09 Dec 24 11:25 UTC | 09 Dec 24 11:25 UTC |
	|         | --alsologtostderr -v=3                                 |                          |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-623695             | old-k8s-version-623695   | jenkins | v1.34.0 | 09 Dec 24 11:25 UTC | 09 Dec 24 11:25 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                          |         |         |                     |                     |
	| start   | -p old-k8s-version-623695                              | old-k8s-version-623695   | jenkins | v1.34.0 | 09 Dec 24 11:25 UTC |                     |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                          |         |         |                     |                     |
	|         | --kvm-network=default                                  |                          |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                          |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                          |         |         |                     |                     |
	|         | --keep-context=false                                   |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                          |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-239649             | no-preload-239649        | jenkins | v1.34.0 | 09 Dec 24 11:26 UTC | 09 Dec 24 11:26 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                          |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                          |         |         |                     |                     |
	| stop    | -p no-preload-239649                                   | no-preload-239649        | jenkins | v1.34.0 | 09 Dec 24 11:26 UTC | 09 Dec 24 11:26 UTC |
	|         | --alsologtostderr -v=3                                 |                          |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-239649                  | no-preload-239649        | jenkins | v1.34.0 | 09 Dec 24 11:26 UTC | 09 Dec 24 11:26 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                          |         |         |                     |                     |
	| start   | -p no-preload-239649                                   | no-preload-239649        | jenkins | v1.34.0 | 09 Dec 24 11:26 UTC | 09 Dec 24 11:30 UTC |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr                                      |                          |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                          |         |         |                     |                     |
	| image   | no-preload-239649 image list                           | no-preload-239649        | jenkins | v1.34.0 | 09 Dec 24 11:31 UTC | 09 Dec 24 11:31 UTC |
	|         | --format=json                                          |                          |         |         |                     |                     |
	| pause   | -p no-preload-239649                                   | no-preload-239649        | jenkins | v1.34.0 | 09 Dec 24 11:31 UTC | 09 Dec 24 11:31 UTC |
	|         | --alsologtostderr -v=1                                 |                          |         |         |                     |                     |
	| unpause | -p no-preload-239649                                   | no-preload-239649        | jenkins | v1.34.0 | 09 Dec 24 11:31 UTC | 09 Dec 24 11:31 UTC |
	|         | --alsologtostderr -v=1                                 |                          |         |         |                     |                     |
	| delete  | -p no-preload-239649                                   | no-preload-239649        | jenkins | v1.34.0 | 09 Dec 24 11:31 UTC | 09 Dec 24 11:31 UTC |
	| delete  | -p no-preload-239649                                   | no-preload-239649        | jenkins | v1.34.0 | 09 Dec 24 11:31 UTC | 09 Dec 24 11:31 UTC |
	| start   | -p embed-certs-545509                                  | embed-certs-545509       | jenkins | v1.34.0 | 09 Dec 24 11:31 UTC |                     |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                          |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                          |         |         |                     |                     |
	|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/09 11:31:15
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.23.2 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 11:31:15.573428  811348 out.go:345] Setting OutFile to fd 1 ...
	I1209 11:31:15.573598  811348 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 11:31:15.573609  811348 out.go:358] Setting ErrFile to fd 2...
	I1209 11:31:15.573614  811348 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 11:31:15.573990  811348 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20068-586689/.minikube/bin
	I1209 11:31:15.574528  811348 out.go:352] Setting JSON to false
	I1209 11:31:15.576393  811348 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":15223,"bootTime":1733728653,"procs":218,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1072-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1209 11:31:15.576518  811348 start.go:139] virtualization:  
	I1209 11:31:15.579470  811348 out.go:177] * [embed-certs-545509] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1209 11:31:15.581883  811348 out.go:177]   - MINIKUBE_LOCATION=20068
	I1209 11:31:15.581966  811348 notify.go:220] Checking for updates...
	I1209 11:31:15.583934  811348 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 11:31:15.586164  811348 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20068-586689/kubeconfig
	I1209 11:31:15.588317  811348 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20068-586689/.minikube
	I1209 11:31:15.590247  811348 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1209 11:31:15.592093  811348 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 11:31:15.594616  811348 config.go:182] Loaded profile config "old-k8s-version-623695": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I1209 11:31:15.594778  811348 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 11:31:15.621007  811348 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1209 11:31:15.621133  811348 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 11:31:15.683949  811348 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:34 OomKillDisable:true NGoroutines:52 SystemTime:2024-12-09 11:31:15.673895775 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1072-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1209 11:31:15.684077  811348 docker.go:318] overlay module found
	I1209 11:31:15.686476  811348 out.go:177] * Using the docker driver based on user configuration
	I1209 11:31:15.688687  811348 start.go:297] selected driver: docker
	I1209 11:31:15.688721  811348 start.go:901] validating driver "docker" against <nil>
	I1209 11:31:15.688736  811348 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 11:31:15.689637  811348 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 11:31:15.748631  811348 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:34 OomKillDisable:true NGoroutines:52 SystemTime:2024-12-09 11:31:15.734044719 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1072-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1209 11:31:15.748867  811348 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1209 11:31:15.749124  811348 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 11:31:15.752097  811348 out.go:177] * Using Docker driver with root privileges
	I1209 11:31:15.754683  811348 cni.go:84] Creating CNI manager for ""
	I1209 11:31:15.754775  811348 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1209 11:31:15.754790  811348 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1209 11:31:15.754921  811348 start.go:340] cluster config:
	{Name:embed-certs-545509 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-545509 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 11:31:15.757449  811348 out.go:177] * Starting "embed-certs-545509" primary control-plane node in "embed-certs-545509" cluster
	I1209 11:31:15.759458  811348 cache.go:121] Beginning downloading kic base image for docker with containerd
	I1209 11:31:15.761848  811348 out.go:177] * Pulling base image v0.0.45-1730888964-19917 ...
	I1209 11:31:15.764070  811348 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime containerd
	I1209 11:31:15.764112  811348 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 in local docker daemon
	I1209 11:31:15.764144  811348 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20068-586689/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-containerd-overlay2-arm64.tar.lz4
	I1209 11:31:15.764154  811348 cache.go:56] Caching tarball of preloaded images
	I1209 11:31:15.764238  811348 preload.go:172] Found /home/jenkins/minikube-integration/20068-586689/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1209 11:31:15.764248  811348 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on containerd
	I1209 11:31:15.764353  811348 profile.go:143] Saving config to /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/embed-certs-545509/config.json ...
	I1209 11:31:15.764371  811348 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/embed-certs-545509/config.json: {Name:mkfdc7f72bbc29f4fa6ffde9e5c99fe240224f1f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 11:31:15.785432  811348 image.go:98] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 in local docker daemon, skipping pull
	I1209 11:31:15.785458  811348 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 exists in daemon, skipping load
	I1209 11:31:15.785478  811348 cache.go:194] Successfully downloaded all kic artifacts
	I1209 11:31:15.785503  811348 start.go:360] acquireMachinesLock for embed-certs-545509: {Name:mk66bd73395460001b6da093a04d1bc9ddd88855 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 11:31:15.786280  811348 start.go:364] duration metric: took 716.878µs to acquireMachinesLock for "embed-certs-545509"
	I1209 11:31:15.786322  811348 start.go:93] Provisioning new machine with config: &{Name:embed-certs-545509 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-545509 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1209 11:31:15.786421  811348 start.go:125] createHost starting for "" (driver="docker")
	I1209 11:31:15.789913  811348 out.go:235] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1209 11:31:15.790235  811348 start.go:159] libmachine.API.Create for "embed-certs-545509" (driver="docker")
	I1209 11:31:15.790274  811348 client.go:168] LocalClient.Create starting
	I1209 11:31:15.790350  811348 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20068-586689/.minikube/certs/ca.pem
	I1209 11:31:15.790394  811348 main.go:141] libmachine: Decoding PEM data...
	I1209 11:31:15.790407  811348 main.go:141] libmachine: Parsing certificate...
	I1209 11:31:15.790462  811348 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20068-586689/.minikube/certs/cert.pem
	I1209 11:31:15.790485  811348 main.go:141] libmachine: Decoding PEM data...
	I1209 11:31:15.790498  811348 main.go:141] libmachine: Parsing certificate...
	I1209 11:31:15.790880  811348 cli_runner.go:164] Run: docker network inspect embed-certs-545509 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1209 11:31:15.807890  811348 cli_runner.go:211] docker network inspect embed-certs-545509 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1209 11:31:15.807973  811348 network_create.go:284] running [docker network inspect embed-certs-545509] to gather additional debugging logs...
	I1209 11:31:15.807994  811348 cli_runner.go:164] Run: docker network inspect embed-certs-545509
	W1209 11:31:15.837264  811348 cli_runner.go:211] docker network inspect embed-certs-545509 returned with exit code 1
	I1209 11:31:15.837294  811348 network_create.go:287] error running [docker network inspect embed-certs-545509]: docker network inspect embed-certs-545509: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-545509 not found
	I1209 11:31:15.837309  811348 network_create.go:289] output of [docker network inspect embed-certs-545509]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-545509 not found
	
	** /stderr **
	I1209 11:31:15.837419  811348 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1209 11:31:15.854075  811348 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-3f46af3becfa IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:b8:cb:97:e7} reservation:<nil>}
	I1209 11:31:15.854807  811348 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-2b0b14c10880 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:64:32:77:d9} reservation:<nil>}
	I1209 11:31:15.855455  811348 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-dc4622f79210 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:ff:ac:62:29} reservation:<nil>}
	I1209 11:31:15.856120  811348 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a02c60}
	I1209 11:31:15.856167  811348 network_create.go:124] attempt to create docker network embed-certs-545509 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1209 11:31:15.856233  811348 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-545509 embed-certs-545509
	I1209 11:31:15.940127  811348 network_create.go:108] docker network embed-certs-545509 192.168.76.0/24 created
	I1209 11:31:15.940158  811348 kic.go:121] calculated static IP "192.168.76.2" for the "embed-certs-545509" container
	I1209 11:31:15.940230  811348 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1209 11:31:15.955717  811348 cli_runner.go:164] Run: docker volume create embed-certs-545509 --label name.minikube.sigs.k8s.io=embed-certs-545509 --label created_by.minikube.sigs.k8s.io=true
	I1209 11:31:15.971905  811348 oci.go:103] Successfully created a docker volume embed-certs-545509
	I1209 11:31:15.971993  811348 cli_runner.go:164] Run: docker run --rm --name embed-certs-545509-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-545509 --entrypoint /usr/bin/test -v embed-certs-545509:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 -d /var/lib
	I1209 11:31:16.652875  811348 oci.go:107] Successfully prepared a docker volume embed-certs-545509
	I1209 11:31:16.652926  811348 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime containerd
	I1209 11:31:16.652948  811348 kic.go:194] Starting extracting preloaded images to volume ...
	I1209 11:31:16.653022  811348 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20068-586689/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-545509:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 -I lz4 -xf /preloaded.tar -C /extractDir
	I1209 11:31:18.488559  800461 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:31:18.503305  800461 api_server.go:72] duration metric: took 5m47.996568848s to wait for apiserver process to appear ...
	I1209 11:31:18.503330  800461 api_server.go:88] waiting for apiserver healthz status ...
	I1209 11:31:18.503367  800461 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:31:18.503422  800461 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:31:18.586637  800461 cri.go:89] found id: "92b50938c8b97d22b97b90d7211713e83c2e408b0a5bfa70a5b801a11893dfe2"
	I1209 11:31:18.586658  800461 cri.go:89] found id: "8f5f2eca7e918147351695f83e907d55498a88bcb6c566f93410605398939265"
	I1209 11:31:18.586663  800461 cri.go:89] found id: ""
	I1209 11:31:18.586670  800461 logs.go:282] 2 containers: [92b50938c8b97d22b97b90d7211713e83c2e408b0a5bfa70a5b801a11893dfe2 8f5f2eca7e918147351695f83e907d55498a88bcb6c566f93410605398939265]
	I1209 11:31:18.586732  800461 ssh_runner.go:195] Run: which crictl
	I1209 11:31:18.592662  800461 ssh_runner.go:195] Run: which crictl
	I1209 11:31:18.597005  800461 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 11:31:18.597082  800461 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:31:18.650624  800461 cri.go:89] found id: "c841cccf0a5bdd1fe2fca556e5e36b880a686f41d9146ef7788f247bce61a9d5"
	I1209 11:31:18.650643  800461 cri.go:89] found id: "2b8b97c8ef833f12b7f54ccb1f4f9e43ee5e96f62f8bb9ee22ef50e58e3ea81f"
	I1209 11:31:18.650648  800461 cri.go:89] found id: ""
	I1209 11:31:18.650655  800461 logs.go:282] 2 containers: [c841cccf0a5bdd1fe2fca556e5e36b880a686f41d9146ef7788f247bce61a9d5 2b8b97c8ef833f12b7f54ccb1f4f9e43ee5e96f62f8bb9ee22ef50e58e3ea81f]
	I1209 11:31:18.650714  800461 ssh_runner.go:195] Run: which crictl
	I1209 11:31:18.655082  800461 ssh_runner.go:195] Run: which crictl
	I1209 11:31:18.659058  800461 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 11:31:18.659127  800461 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:31:18.716242  800461 cri.go:89] found id: "af856b017afebc2eff0a52c30b40b9ae7d18fbed9646a2cd62795c30eae0a468"
	I1209 11:31:18.716262  800461 cri.go:89] found id: "ed42b2a1e21f5bc2262274dce47ab17b0351c70af13f20f07756dca1fc28f478"
	I1209 11:31:18.716267  800461 cri.go:89] found id: ""
	I1209 11:31:18.716275  800461 logs.go:282] 2 containers: [af856b017afebc2eff0a52c30b40b9ae7d18fbed9646a2cd62795c30eae0a468 ed42b2a1e21f5bc2262274dce47ab17b0351c70af13f20f07756dca1fc28f478]
	I1209 11:31:18.716332  800461 ssh_runner.go:195] Run: which crictl
	I1209 11:31:18.721120  800461 ssh_runner.go:195] Run: which crictl
	I1209 11:31:18.725267  800461 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:31:18.725399  800461 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:31:18.784506  800461 cri.go:89] found id: "ae41a05fd4b1117afac3aa1f7764574fd0ed6427f9d72dc4841a9f2d948eeec3"
	I1209 11:31:18.784578  800461 cri.go:89] found id: "0a9c5fc2481f8b0b984d6634dcbb9354b762189dacc2253907afe96468b89d47"
	I1209 11:31:18.784586  800461 cri.go:89] found id: ""
	I1209 11:31:18.784593  800461 logs.go:282] 2 containers: [ae41a05fd4b1117afac3aa1f7764574fd0ed6427f9d72dc4841a9f2d948eeec3 0a9c5fc2481f8b0b984d6634dcbb9354b762189dacc2253907afe96468b89d47]
	I1209 11:31:18.784683  800461 ssh_runner.go:195] Run: which crictl
	I1209 11:31:18.789471  800461 ssh_runner.go:195] Run: which crictl
	I1209 11:31:18.793630  800461 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:31:18.793751  800461 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:31:18.875516  800461 cri.go:89] found id: "167b84f8f987c313cf51f0bcb5bd6488b6ad7b450cd789460a00b513af86bb98"
	I1209 11:31:18.875610  800461 cri.go:89] found id: "a9ed84f8cfb61c9caff73d1ed13931a69b4186cabc325c3cc1809699aa1be74e"
	I1209 11:31:18.875642  800461 cri.go:89] found id: ""
	I1209 11:31:18.875671  800461 logs.go:282] 2 containers: [167b84f8f987c313cf51f0bcb5bd6488b6ad7b450cd789460a00b513af86bb98 a9ed84f8cfb61c9caff73d1ed13931a69b4186cabc325c3cc1809699aa1be74e]
	I1209 11:31:18.875795  800461 ssh_runner.go:195] Run: which crictl
	I1209 11:31:18.882901  800461 ssh_runner.go:195] Run: which crictl
	I1209 11:31:18.891490  800461 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:31:18.891681  800461 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:31:19.133994  800461 cri.go:89] found id: "8afdcb5ba074eac7202e301ff2b67765f72a8dd4e47a0100cf89910df022caaa"
	I1209 11:31:19.134060  800461 cri.go:89] found id: "25fc7bce15ad22f8de3baaeb0bb7f687bb2b3dc94e08ab8eb20977713310736b"
	I1209 11:31:19.134086  800461 cri.go:89] found id: ""
	I1209 11:31:19.134106  800461 logs.go:282] 2 containers: [8afdcb5ba074eac7202e301ff2b67765f72a8dd4e47a0100cf89910df022caaa 25fc7bce15ad22f8de3baaeb0bb7f687bb2b3dc94e08ab8eb20977713310736b]
	I1209 11:31:19.134198  800461 ssh_runner.go:195] Run: which crictl
	I1209 11:31:19.139026  800461 ssh_runner.go:195] Run: which crictl
	I1209 11:31:19.143699  800461 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 11:31:19.143825  800461 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:31:19.193447  800461 cri.go:89] found id: "91cb2ed43dfec514bb35f6e6633f1b7bc69390d1eb0c98d7fd830339fc56e9e3"
	I1209 11:31:19.193537  800461 cri.go:89] found id: "eb174547e077389a60898a71aa5ab17ac0877a9aa4886acc78d7d9454ff6ec44"
	I1209 11:31:19.193560  800461 cri.go:89] found id: ""
	I1209 11:31:19.193579  800461 logs.go:282] 2 containers: [91cb2ed43dfec514bb35f6e6633f1b7bc69390d1eb0c98d7fd830339fc56e9e3 eb174547e077389a60898a71aa5ab17ac0877a9aa4886acc78d7d9454ff6ec44]
	I1209 11:31:19.193678  800461 ssh_runner.go:195] Run: which crictl
	I1209 11:31:19.198061  800461 ssh_runner.go:195] Run: which crictl
	I1209 11:31:19.202246  800461 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:31:19.202370  800461 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:31:19.262303  800461 cri.go:89] found id: "b5b12d4047e8983ca175dce7ad530195e42f6c97736f3423b56ba4b67eee4cfd"
	I1209 11:31:19.262375  800461 cri.go:89] found id: ""
	I1209 11:31:19.262400  800461 logs.go:282] 1 containers: [b5b12d4047e8983ca175dce7ad530195e42f6c97736f3423b56ba4b67eee4cfd]
	I1209 11:31:19.262483  800461 ssh_runner.go:195] Run: which crictl
	I1209 11:31:19.266675  800461 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1209 11:31:19.266798  800461 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 11:31:19.321269  800461 cri.go:89] found id: "663485e6313972a7cfcbedd678323ecb2f884257c37073d5b2e988fad2ae5465"
	I1209 11:31:19.321342  800461 cri.go:89] found id: "1c864e51ed369180554c763558a548ef00ea435a92cc98a093c1261e29cf2bbf"
	I1209 11:31:19.321361  800461 cri.go:89] found id: ""
	I1209 11:31:19.321380  800461 logs.go:282] 2 containers: [663485e6313972a7cfcbedd678323ecb2f884257c37073d5b2e988fad2ae5465 1c864e51ed369180554c763558a548ef00ea435a92cc98a093c1261e29cf2bbf]
	I1209 11:31:19.321461  800461 ssh_runner.go:195] Run: which crictl
	I1209 11:31:19.326755  800461 ssh_runner.go:195] Run: which crictl
	I1209 11:31:19.331149  800461 logs.go:123] Gathering logs for kube-scheduler [0a9c5fc2481f8b0b984d6634dcbb9354b762189dacc2253907afe96468b89d47] ...
	I1209 11:31:19.331228  800461 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0a9c5fc2481f8b0b984d6634dcbb9354b762189dacc2253907afe96468b89d47"
	I1209 11:31:19.386882  800461 logs.go:123] Gathering logs for kindnet [91cb2ed43dfec514bb35f6e6633f1b7bc69390d1eb0c98d7fd830339fc56e9e3] ...
	I1209 11:31:19.386967  800461 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 91cb2ed43dfec514bb35f6e6633f1b7bc69390d1eb0c98d7fd830339fc56e9e3"
	I1209 11:31:19.452242  800461 logs.go:123] Gathering logs for containerd ...
	I1209 11:31:19.452325  800461 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 11:31:19.525862  800461 logs.go:123] Gathering logs for dmesg ...
	I1209 11:31:19.525954  800461 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:31:19.545007  800461 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:31:19.545090  800461 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 11:31:19.724757  800461 logs.go:123] Gathering logs for kube-apiserver [92b50938c8b97d22b97b90d7211713e83c2e408b0a5bfa70a5b801a11893dfe2] ...
	I1209 11:31:19.724789  800461 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92b50938c8b97d22b97b90d7211713e83c2e408b0a5bfa70a5b801a11893dfe2"
	I1209 11:31:19.826367  800461 logs.go:123] Gathering logs for etcd [2b8b97c8ef833f12b7f54ccb1f4f9e43ee5e96f62f8bb9ee22ef50e58e3ea81f] ...
	I1209 11:31:19.826408  800461 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b8b97c8ef833f12b7f54ccb1f4f9e43ee5e96f62f8bb9ee22ef50e58e3ea81f"
	I1209 11:31:19.908141  800461 logs.go:123] Gathering logs for coredns [af856b017afebc2eff0a52c30b40b9ae7d18fbed9646a2cd62795c30eae0a468] ...
	I1209 11:31:19.908227  800461 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 af856b017afebc2eff0a52c30b40b9ae7d18fbed9646a2cd62795c30eae0a468"
	I1209 11:31:19.988919  800461 logs.go:123] Gathering logs for coredns [ed42b2a1e21f5bc2262274dce47ab17b0351c70af13f20f07756dca1fc28f478] ...
	I1209 11:31:19.988947  800461 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ed42b2a1e21f5bc2262274dce47ab17b0351c70af13f20f07756dca1fc28f478"
	I1209 11:31:20.059667  800461 logs.go:123] Gathering logs for kube-proxy [a9ed84f8cfb61c9caff73d1ed13931a69b4186cabc325c3cc1809699aa1be74e] ...
	I1209 11:31:20.059707  800461 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a9ed84f8cfb61c9caff73d1ed13931a69b4186cabc325c3cc1809699aa1be74e"
	I1209 11:31:20.122817  800461 logs.go:123] Gathering logs for kindnet [eb174547e077389a60898a71aa5ab17ac0877a9aa4886acc78d7d9454ff6ec44] ...
	I1209 11:31:20.122865  800461 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb174547e077389a60898a71aa5ab17ac0877a9aa4886acc78d7d9454ff6ec44"
	I1209 11:31:20.176945  800461 logs.go:123] Gathering logs for kube-apiserver [8f5f2eca7e918147351695f83e907d55498a88bcb6c566f93410605398939265] ...
	I1209 11:31:20.176975  800461 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f5f2eca7e918147351695f83e907d55498a88bcb6c566f93410605398939265"
	I1209 11:31:20.248500  800461 logs.go:123] Gathering logs for kube-scheduler [ae41a05fd4b1117afac3aa1f7764574fd0ed6427f9d72dc4841a9f2d948eeec3] ...
	I1209 11:31:20.248577  800461 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ae41a05fd4b1117afac3aa1f7764574fd0ed6427f9d72dc4841a9f2d948eeec3"
	I1209 11:31:20.296022  800461 logs.go:123] Gathering logs for kube-controller-manager [25fc7bce15ad22f8de3baaeb0bb7f687bb2b3dc94e08ab8eb20977713310736b] ...
	I1209 11:31:20.296050  800461 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 25fc7bce15ad22f8de3baaeb0bb7f687bb2b3dc94e08ab8eb20977713310736b"
	I1209 11:31:20.388485  800461 logs.go:123] Gathering logs for kubernetes-dashboard [b5b12d4047e8983ca175dce7ad530195e42f6c97736f3423b56ba4b67eee4cfd] ...
	I1209 11:31:20.388571  800461 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b5b12d4047e8983ca175dce7ad530195e42f6c97736f3423b56ba4b67eee4cfd"
	I1209 11:31:20.438386  800461 logs.go:123] Gathering logs for storage-provisioner [1c864e51ed369180554c763558a548ef00ea435a92cc98a093c1261e29cf2bbf] ...
	I1209 11:31:20.438415  800461 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c864e51ed369180554c763558a548ef00ea435a92cc98a093c1261e29cf2bbf"
	I1209 11:31:20.479553  800461 logs.go:123] Gathering logs for container status ...
	I1209 11:31:20.479584  800461 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:31:20.547014  800461 logs.go:123] Gathering logs for kubelet ...
	I1209 11:31:20.547045  800461 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1209 11:31:20.602779  800461 logs.go:138] Found kubelet problem: Dec 09 11:25:49 old-k8s-version-623695 kubelet[663]: E1209 11:25:49.528000     663 reflector.go:138] object-"kube-system"/"coredns-token-b78rj": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-b78rj" is forbidden: User "system:node:old-k8s-version-623695" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-623695' and this object
	W1209 11:31:20.603031  800461 logs.go:138] Found kubelet problem: Dec 09 11:25:49 old-k8s-version-623695 kubelet[663]: E1209 11:25:49.528077     663 reflector.go:138] object-"kube-system"/"kindnet-token-nl827": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-nl827" is forbidden: User "system:node:old-k8s-version-623695" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-623695' and this object
	W1209 11:31:20.603261  800461 logs.go:138] Found kubelet problem: Dec 09 11:25:49 old-k8s-version-623695 kubelet[663]: E1209 11:25:49.532699     663 reflector.go:138] object-"kube-system"/"storage-provisioner-token-sw5w9": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-sw5w9" is forbidden: User "system:node:old-k8s-version-623695" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-623695' and this object
	W1209 11:31:20.603459  800461 logs.go:138] Found kubelet problem: Dec 09 11:25:49 old-k8s-version-623695 kubelet[663]: E1209 11:25:49.532801     663 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-623695" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-623695' and this object
	W1209 11:31:20.603665  800461 logs.go:138] Found kubelet problem: Dec 09 11:25:49 old-k8s-version-623695 kubelet[663]: E1209 11:25:49.532864     663 reflector.go:138] object-"default"/"default-token-pgtqr": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-pgtqr" is forbidden: User "system:node:old-k8s-version-623695" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-623695' and this object
	W1209 11:31:20.603875  800461 logs.go:138] Found kubelet problem: Dec 09 11:25:49 old-k8s-version-623695 kubelet[663]: E1209 11:25:49.532917     663 reflector.go:138] object-"kube-system"/"kube-proxy-token-tnwqj": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-tnwqj" is forbidden: User "system:node:old-k8s-version-623695" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-623695' and this object
	W1209 11:31:20.604167  800461 logs.go:138] Found kubelet problem: Dec 09 11:25:49 old-k8s-version-623695 kubelet[663]: E1209 11:25:49.532965     663 reflector.go:138] object-"kube-system"/"metrics-server-token-hcpl8": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-hcpl8" is forbidden: User "system:node:old-k8s-version-623695" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-623695' and this object
	W1209 11:31:20.604377  800461 logs.go:138] Found kubelet problem: Dec 09 11:25:49 old-k8s-version-623695 kubelet[663]: E1209 11:25:49.533017     663 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-623695" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-623695' and this object
	W1209 11:31:20.614712  800461 logs.go:138] Found kubelet problem: Dec 09 11:25:51 old-k8s-version-623695 kubelet[663]: E1209 11:25:51.720038     663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W1209 11:31:20.614911  800461 logs.go:138] Found kubelet problem: Dec 09 11:25:51 old-k8s-version-623695 kubelet[663]: E1209 11:25:51.747865     663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1209 11:31:20.617926  800461 logs.go:138] Found kubelet problem: Dec 09 11:26:03 old-k8s-version-623695 kubelet[663]: E1209 11:26:03.558736     663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W1209 11:31:20.620029  800461 logs.go:138] Found kubelet problem: Dec 09 11:26:14 old-k8s-version-623695 kubelet[663]: E1209 11:26:14.883179     663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
	W1209 11:31:20.620222  800461 logs.go:138] Found kubelet problem: Dec 09 11:26:15 old-k8s-version-623695 kubelet[663]: E1209 11:26:15.549936     663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1209 11:31:20.620552  800461 logs.go:138] Found kubelet problem: Dec 09 11:26:15 old-k8s-version-623695 kubelet[663]: E1209 11:26:15.890882     663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
	W1209 11:31:20.621216  800461 logs.go:138] Found kubelet problem: Dec 09 11:26:19 old-k8s-version-623695 kubelet[663]: E1209 11:26:19.928539     663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
	W1209 11:31:20.621656  800461 logs.go:138] Found kubelet problem: Dec 09 11:26:21 old-k8s-version-623695 kubelet[663]: E1209 11:26:21.930997     663 pod_workers.go:191] Error syncing pod a4b9e510-c334-4949-a8ad-1f3f41854e03 ("storage-provisioner_kube-system(a4b9e510-c334-4949-a8ad-1f3f41854e03)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(a4b9e510-c334-4949-a8ad-1f3f41854e03)"
	W1209 11:31:20.624090  800461 logs.go:138] Found kubelet problem: Dec 09 11:26:28 old-k8s-version-623695 kubelet[663]: E1209 11:26:28.556809     663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W1209 11:31:20.625177  800461 logs.go:138] Found kubelet problem: Dec 09 11:26:33 old-k8s-version-623695 kubelet[663]: E1209 11:26:33.968067     663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
	W1209 11:31:20.625506  800461 logs.go:138] Found kubelet problem: Dec 09 11:26:39 old-k8s-version-623695 kubelet[663]: E1209 11:26:39.927886     663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
	W1209 11:31:20.625700  800461 logs.go:138] Found kubelet problem: Dec 09 11:26:42 old-k8s-version-623695 kubelet[663]: E1209 11:26:42.551170     663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1209 11:31:20.626029  800461 logs.go:138] Found kubelet problem: Dec 09 11:26:52 old-k8s-version-623695 kubelet[663]: E1209 11:26:52.546429     663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
	W1209 11:31:20.626212  800461 logs.go:138] Found kubelet problem: Dec 09 11:26:53 old-k8s-version-623695 kubelet[663]: E1209 11:26:53.546792     663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1209 11:31:20.626396  800461 logs.go:138] Found kubelet problem: Dec 09 11:27:04 old-k8s-version-623695 kubelet[663]: E1209 11:27:04.547503     663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1209 11:31:20.626985  800461 logs.go:138] Found kubelet problem: Dec 09 11:27:06 old-k8s-version-623695 kubelet[663]: E1209 11:27:06.138491     663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
	W1209 11:31:20.627309  800461 logs.go:138] Found kubelet problem: Dec 09 11:27:09 old-k8s-version-623695 kubelet[663]: E1209 11:27:09.927608     663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
	W1209 11:31:20.629742  800461 logs.go:138] Found kubelet problem: Dec 09 11:27:15 old-k8s-version-623695 kubelet[663]: E1209 11:27:15.558173     663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W1209 11:31:20.630068  800461 logs.go:138] Found kubelet problem: Dec 09 11:27:22 old-k8s-version-623695 kubelet[663]: E1209 11:27:22.550284     663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
	W1209 11:31:20.630252  800461 logs.go:138] Found kubelet problem: Dec 09 11:27:29 old-k8s-version-623695 kubelet[663]: E1209 11:27:29.559056     663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1209 11:31:20.630574  800461 logs.go:138] Found kubelet problem: Dec 09 11:27:35 old-k8s-version-623695 kubelet[663]: E1209 11:27:35.546799     663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
	W1209 11:31:20.630756  800461 logs.go:138] Found kubelet problem: Dec 09 11:27:43 old-k8s-version-623695 kubelet[663]: E1209 11:27:43.546652     663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1209 11:31:20.631337  800461 logs.go:138] Found kubelet problem: Dec 09 11:27:47 old-k8s-version-623695 kubelet[663]: E1209 11:27:47.281093     663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
	W1209 11:31:20.631667  800461 logs.go:138] Found kubelet problem: Dec 09 11:27:49 old-k8s-version-623695 kubelet[663]: E1209 11:27:49.927704     663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
	W1209 11:31:20.631851  800461 logs.go:138] Found kubelet problem: Dec 09 11:27:55 old-k8s-version-623695 kubelet[663]: E1209 11:27:55.546651     663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1209 11:31:20.632178  800461 logs.go:138] Found kubelet problem: Dec 09 11:28:03 old-k8s-version-623695 kubelet[663]: E1209 11:28:03.546208     663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
	W1209 11:31:20.632359  800461 logs.go:138] Found kubelet problem: Dec 09 11:28:10 old-k8s-version-623695 kubelet[663]: E1209 11:28:10.546626     663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1209 11:31:20.632692  800461 logs.go:138] Found kubelet problem: Dec 09 11:28:15 old-k8s-version-623695 kubelet[663]: E1209 11:28:15.546835     663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
	W1209 11:31:20.632876  800461 logs.go:138] Found kubelet problem: Dec 09 11:28:23 old-k8s-version-623695 kubelet[663]: E1209 11:28:23.546586     663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1209 11:31:20.633206  800461 logs.go:138] Found kubelet problem: Dec 09 11:28:26 old-k8s-version-623695 kubelet[663]: E1209 11:28:26.546890     663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
	W1209 11:31:20.635619  800461 logs.go:138] Found kubelet problem: Dec 09 11:28:38 old-k8s-version-623695 kubelet[663]: E1209 11:28:38.563060     663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W1209 11:31:20.635943  800461 logs.go:138] Found kubelet problem: Dec 09 11:28:40 old-k8s-version-623695 kubelet[663]: E1209 11:28:40.546787     663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
	W1209 11:31:20.636130  800461 logs.go:138] Found kubelet problem: Dec 09 11:28:50 old-k8s-version-623695 kubelet[663]: E1209 11:28:50.546870     663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1209 11:31:20.636454  800461 logs.go:138] Found kubelet problem: Dec 09 11:28:55 old-k8s-version-623695 kubelet[663]: E1209 11:28:55.546234     663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
	W1209 11:31:20.636636  800461 logs.go:138] Found kubelet problem: Dec 09 11:29:03 old-k8s-version-623695 kubelet[663]: E1209 11:29:03.546823     663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1209 11:31:20.637224  800461 logs.go:138] Found kubelet problem: Dec 09 11:29:10 old-k8s-version-623695 kubelet[663]: E1209 11:29:10.508888     663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
	W1209 11:31:20.637408  800461 logs.go:138] Found kubelet problem: Dec 09 11:29:17 old-k8s-version-623695 kubelet[663]: E1209 11:29:17.546618     663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1209 11:31:20.637739  800461 logs.go:138] Found kubelet problem: Dec 09 11:29:19 old-k8s-version-623695 kubelet[663]: E1209 11:29:19.928082     663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
	W1209 11:31:20.637921  800461 logs.go:138] Found kubelet problem: Dec 09 11:29:28 old-k8s-version-623695 kubelet[663]: E1209 11:29:28.547234     663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1209 11:31:20.638245  800461 logs.go:138] Found kubelet problem: Dec 09 11:29:31 old-k8s-version-623695 kubelet[663]: E1209 11:29:31.546227     663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
	W1209 11:31:20.638429  800461 logs.go:138] Found kubelet problem: Dec 09 11:29:41 old-k8s-version-623695 kubelet[663]: E1209 11:29:41.546721     663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1209 11:31:20.638756  800461 logs.go:138] Found kubelet problem: Dec 09 11:29:43 old-k8s-version-623695 kubelet[663]: E1209 11:29:43.546444     663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
	W1209 11:31:20.639078  800461 logs.go:138] Found kubelet problem: Dec 09 11:29:56 old-k8s-version-623695 kubelet[663]: E1209 11:29:56.547387     663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
	W1209 11:31:20.639264  800461 logs.go:138] Found kubelet problem: Dec 09 11:29:56 old-k8s-version-623695 kubelet[663]: E1209 11:29:56.548186     663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1209 11:31:20.639608  800461 logs.go:138] Found kubelet problem: Dec 09 11:30:09 old-k8s-version-623695 kubelet[663]: E1209 11:30:09.546239     663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
	W1209 11:31:20.639791  800461 logs.go:138] Found kubelet problem: Dec 09 11:30:11 old-k8s-version-623695 kubelet[663]: E1209 11:30:11.546522     663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1209 11:31:20.640114  800461 logs.go:138] Found kubelet problem: Dec 09 11:30:20 old-k8s-version-623695 kubelet[663]: E1209 11:30:20.547174     663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
	W1209 11:31:20.640296  800461 logs.go:138] Found kubelet problem: Dec 09 11:30:26 old-k8s-version-623695 kubelet[663]: E1209 11:30:26.547093     663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1209 11:31:20.640619  800461 logs.go:138] Found kubelet problem: Dec 09 11:30:33 old-k8s-version-623695 kubelet[663]: E1209 11:30:33.546231     663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
	W1209 11:31:20.640802  800461 logs.go:138] Found kubelet problem: Dec 09 11:30:39 old-k8s-version-623695 kubelet[663]: E1209 11:30:39.546660     663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1209 11:31:20.641125  800461 logs.go:138] Found kubelet problem: Dec 09 11:30:44 old-k8s-version-623695 kubelet[663]: E1209 11:30:44.548327     663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
	W1209 11:31:20.641326  800461 logs.go:138] Found kubelet problem: Dec 09 11:30:50 old-k8s-version-623695 kubelet[663]: E1209 11:30:50.546557     663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1209 11:31:20.641653  800461 logs.go:138] Found kubelet problem: Dec 09 11:30:56 old-k8s-version-623695 kubelet[663]: E1209 11:30:56.546828     663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
	W1209 11:31:20.641835  800461 logs.go:138] Found kubelet problem: Dec 09 11:31:05 old-k8s-version-623695 kubelet[663]: E1209 11:31:05.546612     663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1209 11:31:20.642158  800461 logs.go:138] Found kubelet problem: Dec 09 11:31:07 old-k8s-version-623695 kubelet[663]: E1209 11:31:07.547033     663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
	W1209 11:31:20.642482  800461 logs.go:138] Found kubelet problem: Dec 09 11:31:18 old-k8s-version-623695 kubelet[663]: E1209 11:31:18.546764     663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
	W1209 11:31:20.644887  800461 logs.go:138] Found kubelet problem: Dec 09 11:31:18 old-k8s-version-623695 kubelet[663]: E1209 11:31:18.588176     663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
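The ErrImagePull and ImagePullBackOff entries above are the expected steady state for this test: as the stdout earlier shows, metrics-server is deliberately pointed at the unresolvable registry fake.domain, so every pull attempt dies at DNS resolution ("lookup fake.domain ... no such host"). A minimal way to observe the same failure by hand, assuming the profile name from this log and a kubectl context of the same name:

	# Pod events show the same back-off cycle recorded in the kubelet log
	kubectl --context old-k8s-version-623695 -n kube-system \
	  describe pod metrics-server-9975d5f86-9pw69
	# DNS for the fake registry fails inside the node, as in the log
	minikube -p old-k8s-version-623695 ssh -- getent hosts fake.domain   # no output: name does not resolve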
	I1209 11:31:20.644899  800461 logs.go:123] Gathering logs for etcd [c841cccf0a5bdd1fe2fca556e5e36b880a686f41d9146ef7788f247bce61a9d5] ...
	I1209 11:31:20.644916  800461 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c841cccf0a5bdd1fe2fca556e5e36b880a686f41d9146ef7788f247bce61a9d5"
	I1209 11:31:20.702944  800461 logs.go:123] Gathering logs for kube-proxy [167b84f8f987c313cf51f0bcb5bd6488b6ad7b450cd789460a00b513af86bb98] ...
	I1209 11:31:20.702973  800461 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 167b84f8f987c313cf51f0bcb5bd6488b6ad7b450cd789460a00b513af86bb98"
	I1209 11:31:20.752172  800461 logs.go:123] Gathering logs for kube-controller-manager [8afdcb5ba074eac7202e301ff2b67765f72a8dd4e47a0100cf89910df022caaa] ...
	I1209 11:31:20.752205  800461 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8afdcb5ba074eac7202e301ff2b67765f72a8dd4e47a0100cf89910df022caaa"
	I1209 11:31:20.826676  800461 logs.go:123] Gathering logs for storage-provisioner [663485e6313972a7cfcbedd678323ecb2f884257c37073d5b2e988fad2ae5465] ...
	I1209 11:31:20.826715  800461 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 663485e6313972a7cfcbedd678323ecb2f884257c37073d5b2e988fad2ae5465"
	I1209 11:31:20.876353  800461 out.go:358] Setting ErrFile to fd 2...
	I1209 11:31:20.876379  800461 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1209 11:31:20.876431  800461 out.go:270] X Problems detected in kubelet:
	W1209 11:31:20.876458  800461 out.go:270]   Dec 09 11:30:56 old-k8s-version-623695 kubelet[663]: E1209 11:30:56.546828     663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
	W1209 11:31:20.876476  800461 out.go:270]   Dec 09 11:31:05 old-k8s-version-623695 kubelet[663]: E1209 11:31:05.546612     663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1209 11:31:20.876492  800461 out.go:270]   Dec 09 11:31:07 old-k8s-version-623695 kubelet[663]: E1209 11:31:07.547033     663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
	W1209 11:31:20.876499  800461 out.go:270]   Dec 09 11:31:18 old-k8s-version-623695 kubelet[663]: E1209 11:31:18.546764     663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
	W1209 11:31:20.876505  800461 out.go:270]   Dec 09 11:31:18 old-k8s-version-623695 kubelet[663]: E1209 11:31:18.588176     663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	I1209 11:31:20.876532  800461 out.go:358] Setting ErrFile to fd 2...
	I1209 11:31:20.876539  800461 out.go:392] TERM=,COLORTERM=, which probably does not support color
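From this point on, the report interleaves two concurrent minikube processes: PID 800461 is the failing old-k8s-version-623695 start, while PID 811348 is an unrelated embed-certs-545509 test running in parallel on the same Jenkins host. When reading such a combined log, filtering on the PID column separates the streams; a sketch, assuming the report has been saved to a hypothetical file named combined.log:

	grep ' 800461 ' combined.log   # only the old-k8s-version-623695 lines
	grep ' 811348 ' combined.log   # only the embed-certs-545509 lines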
	I1209 11:31:21.323236  811348 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20068-586689/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-545509:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 -I lz4 -xf /preloaded.tar -C /extractDir: (4.670168419s)
	I1209 11:31:21.323267  811348 kic.go:203] duration metric: took 4.670315958s to extract preloaded images to volume ...
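The completed command above shows how minikube seeds the node's containerd image store: the preloaded lz4 tarball is bind-mounted read-only into a throwaway kicbase container and untarred straight into the profile's named volume. The same technique works for populating any Docker volume; a generic sketch, with the volume and tarball names made up for illustration:

	docker volume create demo-vol
	docker run --rm \
	  -v "$PWD/preloaded.tar.lz4:/preloaded.tar:ro" -v demo-vol:/extractDir \
	  --entrypoint /usr/bin/tar \
	  gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917 \
	  -I lz4 -xf /preloaded.tar -C /extractDir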
	W1209 11:31:21.323430  811348 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1209 11:31:21.323543  811348 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1209 11:31:21.380950  811348 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-545509 --name embed-certs-545509 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-545509 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-545509 --network embed-certs-545509 --ip 192.168.76.2 --volume embed-certs-545509:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615
	I1209 11:31:21.728989  811348 cli_runner.go:164] Run: docker container inspect embed-certs-545509 --format={{.State.Running}}
	I1209 11:31:21.750758  811348 cli_runner.go:164] Run: docker container inspect embed-certs-545509 --format={{.State.Status}}
	I1209 11:31:21.789353  811348 cli_runner.go:164] Run: docker exec embed-certs-545509 stat /var/lib/dpkg/alternatives/iptables
	I1209 11:31:21.843884  811348 oci.go:144] the created container "embed-certs-545509" has a running status.
	I1209 11:31:21.843914  811348 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/20068-586689/.minikube/machines/embed-certs-545509/id_rsa...
	I1209 11:31:22.070646  811348 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/20068-586689/.minikube/machines/embed-certs-545509/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1209 11:31:22.101445  811348 cli_runner.go:164] Run: docker container inspect embed-certs-545509 --format={{.State.Status}}
	I1209 11:31:22.135193  811348 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1209 11:31:22.135213  811348 kic_runner.go:114] Args: [docker exec --privileged embed-certs-545509 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1209 11:31:22.195833  811348 cli_runner.go:164] Run: docker container inspect embed-certs-545509 --format={{.State.Status}}
	I1209 11:31:22.218411  811348 machine.go:93] provisionDockerMachine start ...
	I1209 11:31:22.218518  811348 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-545509
	I1209 11:31:22.246110  811348 main.go:141] libmachine: Using SSH client type: native
	I1209 11:31:22.246376  811348 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x415f50] 0x418790 <nil>  [] 0s} 127.0.0.1 33812 <nil> <nil>}
	I1209 11:31:22.246385  811348 main.go:141] libmachine: About to run SSH command:
	hostname
	I1209 11:31:22.246995  811348 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:38070->127.0.0.1:33812: read: connection reset by peer
	I1209 11:31:25.374425  811348 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-545509
	
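The container publishes sshd on a random loopback port (--publish=127.0.0.1::22 in the docker run above), which is why libmachine dials 127.0.0.1:33812; the handshake at 11:31:22 fails only because sshd is not up yet, and the retry at 11:31:25 succeeds. To recover the mapped port and connect manually, using the key path from this log:

	docker port embed-certs-545509 22/tcp   # prints e.g. 127.0.0.1:33812
	ssh -p 33812 -i /home/jenkins/minikube-integration/20068-586689/.minikube/machines/embed-certs-545509/id_rsa docker@127.0.0.1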
	I1209 11:31:25.374448  811348 ubuntu.go:169] provisioning hostname "embed-certs-545509"
	I1209 11:31:25.374512  811348 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-545509
	I1209 11:31:25.393078  811348 main.go:141] libmachine: Using SSH client type: native
	I1209 11:31:25.393458  811348 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x415f50] 0x418790 <nil>  [] 0s} 127.0.0.1 33812 <nil> <nil>}
	I1209 11:31:25.393482  811348 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-545509 && echo "embed-certs-545509" | sudo tee /etc/hostname
	I1209 11:31:25.531738  811348 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-545509
	
	I1209 11:31:25.531822  811348 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-545509
	I1209 11:31:25.550856  811348 main.go:141] libmachine: Using SSH client type: native
	I1209 11:31:25.551119  811348 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x415f50] 0x418790 <nil>  [] 0s} 127.0.0.1 33812 <nil> <nil>}
	I1209 11:31:25.551142  811348 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-545509' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-545509/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-545509' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 11:31:25.677858  811348 main.go:141] libmachine: SSH cmd err, output: <nil>: 
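The script above keeps the node's own hostname resolvable after the rename: if no /etc/hosts entry matches embed-certs-545509, it either rewrites an existing 127.0.1.1 line or appends one. A quick check of the result, assuming a shell on the node:

	getent hosts embed-certs-545509   # resolves via the /etc/hosts entry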
	I1209 11:31:25.677885  811348 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20068-586689/.minikube CaCertPath:/home/jenkins/minikube-integration/20068-586689/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20068-586689/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20068-586689/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20068-586689/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20068-586689/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20068-586689/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20068-586689/.minikube}
	I1209 11:31:25.677910  811348 ubuntu.go:177] setting up certificates
	I1209 11:31:25.677919  811348 provision.go:84] configureAuth start
	I1209 11:31:25.677986  811348 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-545509
	I1209 11:31:25.696258  811348 provision.go:143] copyHostCerts
	I1209 11:31:25.696335  811348 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-586689/.minikube/ca.pem, removing ...
	I1209 11:31:25.696345  811348 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-586689/.minikube/ca.pem
	I1209 11:31:25.696427  811348 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-586689/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20068-586689/.minikube/ca.pem (1078 bytes)
	I1209 11:31:25.696524  811348 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-586689/.minikube/cert.pem, removing ...
	I1209 11:31:25.696530  811348 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-586689/.minikube/cert.pem
	I1209 11:31:25.696561  811348 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-586689/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20068-586689/.minikube/cert.pem (1123 bytes)
	I1209 11:31:25.696624  811348 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-586689/.minikube/key.pem, removing ...
	I1209 11:31:25.696629  811348 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-586689/.minikube/key.pem
	I1209 11:31:25.696652  811348 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-586689/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20068-586689/.minikube/key.pem (1679 bytes)
	I1209 11:31:25.696697  811348 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20068-586689/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20068-586689/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20068-586689/.minikube/certs/ca-key.pem org=jenkins.embed-certs-545509 san=[127.0.0.1 192.168.76.2 embed-certs-545509 localhost minikube]
	I1209 11:31:26.422450  811348 provision.go:177] copyRemoteCerts
	I1209 11:31:26.422525  811348 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 11:31:26.422574  811348 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-545509
	I1209 11:31:26.440421  811348 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33812 SSHKeyPath:/home/jenkins/minikube-integration/20068-586689/.minikube/machines/embed-certs-545509/id_rsa Username:docker}
	I1209 11:31:26.536007  811348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-586689/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1209 11:31:26.571091  811348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-586689/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1209 11:31:26.599400  811348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-586689/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1209 11:31:26.626277  811348 provision.go:87] duration metric: took 948.344347ms to configureAuth
	I1209 11:31:26.626307  811348 ubuntu.go:193] setting minikube options for container-runtime
	I1209 11:31:26.626488  811348 config.go:182] Loaded profile config "embed-certs-545509": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
	I1209 11:31:26.626503  811348 machine.go:96] duration metric: took 4.408072193s to provisionDockerMachine
	I1209 11:31:26.626510  811348 client.go:171] duration metric: took 10.836230736s to LocalClient.Create
	I1209 11:31:26.626524  811348 start.go:167] duration metric: took 10.836291709s to libmachine.API.Create "embed-certs-545509"
	I1209 11:31:26.626531  811348 start.go:293] postStartSetup for "embed-certs-545509" (driver="docker")
	I1209 11:31:26.626540  811348 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 11:31:26.626592  811348 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 11:31:26.626640  811348 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-545509
	I1209 11:31:26.645080  811348 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33812 SSHKeyPath:/home/jenkins/minikube-integration/20068-586689/.minikube/machines/embed-certs-545509/id_rsa Username:docker}
	I1209 11:31:26.742716  811348 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 11:31:26.746489  811348 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1209 11:31:26.746531  811348 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1209 11:31:26.746543  811348 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1209 11:31:26.746550  811348 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1209 11:31:26.746564  811348 filesync.go:126] Scanning /home/jenkins/minikube-integration/20068-586689/.minikube/addons for local assets ...
	I1209 11:31:26.746623  811348 filesync.go:126] Scanning /home/jenkins/minikube-integration/20068-586689/.minikube/files for local assets ...
	I1209 11:31:26.746712  811348 filesync.go:149] local asset: /home/jenkins/minikube-integration/20068-586689/.minikube/files/etc/ssl/certs/5920802.pem -> 5920802.pem in /etc/ssl/certs
	I1209 11:31:26.746822  811348 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 11:31:26.757971  811348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-586689/.minikube/files/etc/ssl/certs/5920802.pem --> /etc/ssl/certs/5920802.pem (1708 bytes)
	I1209 11:31:26.783851  811348 start.go:296] duration metric: took 157.305438ms for postStartSetup
	I1209 11:31:26.784226  811348 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-545509
	I1209 11:31:26.800714  811348 profile.go:143] Saving config to /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/embed-certs-545509/config.json ...
	I1209 11:31:26.800994  811348 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 11:31:26.801037  811348 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-545509
	I1209 11:31:26.817981  811348 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33812 SSHKeyPath:/home/jenkins/minikube-integration/20068-586689/.minikube/machines/embed-certs-545509/id_rsa Username:docker}
	I1209 11:31:26.906915  811348 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1209 11:31:26.911888  811348 start.go:128] duration metric: took 11.125437687s to createHost
	I1209 11:31:26.911914  811348 start.go:83] releasing machines lock for "embed-certs-545509", held for 11.125613698s
	I1209 11:31:26.911991  811348 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-545509
	I1209 11:31:26.929466  811348 ssh_runner.go:195] Run: cat /version.json
	I1209 11:31:26.929494  811348 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 11:31:26.929520  811348 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-545509
	I1209 11:31:26.929563  811348 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-545509
	I1209 11:31:26.949671  811348 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33812 SSHKeyPath:/home/jenkins/minikube-integration/20068-586689/.minikube/machines/embed-certs-545509/id_rsa Username:docker}
	I1209 11:31:26.952461  811348 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33812 SSHKeyPath:/home/jenkins/minikube-integration/20068-586689/.minikube/machines/embed-certs-545509/id_rsa Username:docker}
	I1209 11:31:27.037278  811348 ssh_runner.go:195] Run: systemctl --version
	I1209 11:31:27.176180  811348 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1209 11:31:27.180801  811348 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I1209 11:31:27.211666  811348 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I1209 11:31:27.211747  811348 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 11:31:27.249415  811348 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1209 11:31:27.249438  811348 start.go:495] detecting cgroup driver to use...
	I1209 11:31:27.249471  811348 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1209 11:31:27.249519  811348 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1209 11:31:27.262805  811348 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1209 11:31:27.275464  811348 docker.go:217] disabling cri-docker service (if available) ...
	I1209 11:31:27.275533  811348 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 11:31:27.291817  811348 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 11:31:27.309859  811348 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 11:31:27.408651  811348 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 11:31:27.513402  811348 docker.go:233] disabling docker service ...
	I1209 11:31:27.513511  811348 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 11:31:27.535928  811348 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 11:31:27.548633  811348 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 11:31:27.644733  811348 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 11:31:27.737886  811348 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 11:31:27.750512  811348 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 11:31:27.767670  811348 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1209 11:31:27.777999  811348 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1209 11:31:27.789858  811348 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1209 11:31:27.789941  811348 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1209 11:31:27.801637  811348 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1209 11:31:27.813406  811348 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1209 11:31:27.825088  811348 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1209 11:31:27.836077  811348 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 11:31:27.846723  811348 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1209 11:31:27.857989  811348 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1209 11:31:27.868882  811348 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1209 11:31:27.880009  811348 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 11:31:27.889409  811348 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 11:31:27.898379  811348 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 11:31:27.993526  811348 ssh_runner.go:195] Run: sudo systemctl restart containerd
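The sed batch above rewrites /etc/containerd/config.toml in place: SystemdCgroup = false to match the cgroupfs driver detected on the host, the pause image pinned to registry.k8s.io/pause:3.10, the v2 runc shim, the CNI conf_dir, and unprivileged ports enabled, after which containerd is restarted to apply it. A way to spot-check the result on the node (key names as used by the commands above; a sketch, not verified against this exact config.toml):

	sudo grep -nE 'SystemdCgroup|sandbox_image|conf_dir|enable_unprivileged_ports' /etc/containerd/config.toml
	sudo systemctl is-active containerd && sudo crictl info >/dev/null && echo containerd OK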
	I1209 11:31:28.167600  811348 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I1209 11:31:28.167718  811348 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1209 11:31:28.171976  811348 start.go:563] Will wait 60s for crictl version
	I1209 11:31:28.172058  811348 ssh_runner.go:195] Run: which crictl
	I1209 11:31:28.176037  811348 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1209 11:31:28.222921  811348 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.22
	RuntimeApiVersion:  v1
	I1209 11:31:28.223015  811348 ssh_runner.go:195] Run: containerd --version
	I1209 11:31:28.255129  811348 ssh_runner.go:195] Run: containerd --version
	I1209 11:31:28.284188  811348 out.go:177] * Preparing Kubernetes v1.31.2 on containerd 1.7.22 ...
	I1209 11:31:28.286585  811348 cli_runner.go:164] Run: docker network inspect embed-certs-545509 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1209 11:31:28.307472  811348 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1209 11:31:28.312062  811348 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 11:31:28.324563  811348 kubeadm.go:883] updating cluster {Name:embed-certs-545509 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-545509 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 11:31:28.324686  811348 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime containerd
	I1209 11:31:28.324746  811348 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 11:31:28.364142  811348 containerd.go:627] all images are preloaded for containerd runtime.
	I1209 11:31:28.364165  811348 containerd.go:534] Images already preloaded, skipping extraction
	I1209 11:31:28.364232  811348 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 11:31:28.405097  811348 containerd.go:627] all images are preloaded for containerd runtime.
	I1209 11:31:28.405217  811348 cache_images.go:84] Images are preloaded, skipping loading
	I1209 11:31:28.405241  811348 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.31.2 containerd true true} ...
	I1209 11:31:28.405383  811348 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-545509 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:embed-certs-545509 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
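In the drop-in above, the bare ExecStart= line is the standard systemd idiom for clearing the ExecStart inherited from the base kubelet.service before substituting minikube's own command line; without it, systemd would reject a second ExecStart for a non-oneshot service. To view the merged unit on the node:

	systemctl cat kubelet          # base unit plus the 10-kubeadm.conf drop-in
	systemd-delta --type=extended  # lists units extended by drop-ins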
	I1209 11:31:28.405494  811348 ssh_runner.go:195] Run: sudo crictl info
	I1209 11:31:28.444085  811348 cni.go:84] Creating CNI manager for ""
	I1209 11:31:28.444116  811348 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1209 11:31:28.444126  811348 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1209 11:31:28.444148  811348 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-545509 NodeName:embed-certs-545509 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 11:31:28.444266  811348 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "embed-certs-545509"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
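This generated kubeadm config (written to /var/tmp/minikube/kubeadm.yaml.new below) wires together the choices made earlier: the v1beta4 API with its name/value extraArgs lists, cgroupfs as the kubelet cgroup driver, the containerd socket as the CRI endpoint, and eviction effectively disabled so image GC never fights the preloaded images. Such a config can be validated without mutating the node; a sketch using the versioned kubeadm binary path that appears in this log:

	sudo /var/lib/minikube/binaries/v1.31.2/kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml --dry-run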
	I1209 11:31:28.444337  811348 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1209 11:31:28.456117  811348 binaries.go:44] Found k8s binaries, skipping transfer
	I1209 11:31:28.456197  811348 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 11:31:28.467197  811348 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I1209 11:31:28.487075  811348 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 11:31:28.511070  811348 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2308 bytes)
	I1209 11:31:28.531241  811348 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1209 11:31:28.535149  811348 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 11:31:28.547692  811348 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 11:31:28.641717  811348 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 11:31:28.657655  811348 certs.go:68] Setting up /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/embed-certs-545509 for IP: 192.168.76.2
	I1209 11:31:28.657680  811348 certs.go:194] generating shared ca certs ...
	I1209 11:31:28.657696  811348 certs.go:226] acquiring lock for ca certs: {Name:mkf9a6796a1bfe0d2ad344a1e9f65da735c51ff9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 11:31:28.657830  811348 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20068-586689/.minikube/ca.key
	I1209 11:31:28.657877  811348 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20068-586689/.minikube/proxy-client-ca.key
	I1209 11:31:28.657888  811348 certs.go:256] generating profile certs ...
	I1209 11:31:28.657941  811348 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/embed-certs-545509/client.key
	I1209 11:31:28.657956  811348 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/embed-certs-545509/client.crt with IP's: []
	I1209 11:31:28.782708  811348 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/embed-certs-545509/client.crt ...
	I1209 11:31:28.782741  811348 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/embed-certs-545509/client.crt: {Name:mka364cd8d3839fdd6533d20e8d536d60e039f51 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 11:31:28.782955  811348 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/embed-certs-545509/client.key ...
	I1209 11:31:28.782972  811348 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/embed-certs-545509/client.key: {Name:mkbc8b7899b3bc89be7acc1f8207e69a33dbda78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 11:31:28.784523  811348 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/embed-certs-545509/apiserver.key.4c36df8d
	I1209 11:31:28.784548  811348 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/embed-certs-545509/apiserver.crt.4c36df8d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1209 11:31:29.419151  811348 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/embed-certs-545509/apiserver.crt.4c36df8d ...
	I1209 11:31:29.419183  811348 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/embed-certs-545509/apiserver.crt.4c36df8d: {Name:mk92be70b93f0a8661973b22ba7ac43456a22b8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 11:31:29.419815  811348 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/embed-certs-545509/apiserver.key.4c36df8d ...
	I1209 11:31:29.419837  811348 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/embed-certs-545509/apiserver.key.4c36df8d: {Name:mk9fd297115da815d1944e03426e9507db08a458 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 11:31:29.420349  811348 certs.go:381] copying /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/embed-certs-545509/apiserver.crt.4c36df8d -> /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/embed-certs-545509/apiserver.crt
	I1209 11:31:29.420444  811348 certs.go:385] copying /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/embed-certs-545509/apiserver.key.4c36df8d -> /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/embed-certs-545509/apiserver.key
	I1209 11:31:29.420509  811348 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/embed-certs-545509/proxy-client.key
	I1209 11:31:29.420532  811348 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/embed-certs-545509/proxy-client.crt with IP's: []
	I1209 11:31:29.893361  811348 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/embed-certs-545509/proxy-client.crt ...
	I1209 11:31:29.893392  811348 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/embed-certs-545509/proxy-client.crt: {Name:mk679fcdcb2745d458b20bf94d17dad4654aac98 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 11:31:29.894046  811348 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/embed-certs-545509/proxy-client.key ...
	I1209 11:31:29.894067  811348 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/embed-certs-545509/proxy-client.key: {Name:mk9997d254905b89b2d988644e4b4963149eede8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 11:31:29.894845  811348 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-586689/.minikube/certs/592080.pem (1338 bytes)
	W1209 11:31:29.894891  811348 certs.go:480] ignoring /home/jenkins/minikube-integration/20068-586689/.minikube/certs/592080_empty.pem, impossibly tiny 0 bytes
	I1209 11:31:29.894904  811348 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-586689/.minikube/certs/ca-key.pem (1679 bytes)
	I1209 11:31:29.894932  811348 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-586689/.minikube/certs/ca.pem (1078 bytes)
	I1209 11:31:29.894959  811348 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-586689/.minikube/certs/cert.pem (1123 bytes)
	I1209 11:31:29.894988  811348 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-586689/.minikube/certs/key.pem (1679 bytes)
	I1209 11:31:29.895034  811348 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-586689/.minikube/files/etc/ssl/certs/5920802.pem (1708 bytes)
	I1209 11:31:29.895678  811348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-586689/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 11:31:29.927038  811348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-586689/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1209 11:31:29.954215  811348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-586689/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 11:31:29.982964  811348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-586689/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1209 11:31:30.055108  811348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/embed-certs-545509/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1209 11:31:30.093099  811348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/embed-certs-545509/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1209 11:31:30.164474  811348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/embed-certs-545509/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 11:31:30.202325  811348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/embed-certs-545509/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1209 11:31:30.238147  811348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-586689/.minikube/files/etc/ssl/certs/5920802.pem --> /usr/share/ca-certificates/5920802.pem (1708 bytes)
	I1209 11:31:30.268999  811348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-586689/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 11:31:30.297111  811348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-586689/.minikube/certs/592080.pem --> /usr/share/ca-certificates/592080.pem (1338 bytes)
	I1209 11:31:30.323640  811348 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 11:31:30.343677  811348 ssh_runner.go:195] Run: openssl version
	I1209 11:31:30.349726  811348 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5920802.pem && ln -fs /usr/share/ca-certificates/5920802.pem /etc/ssl/certs/5920802.pem"
	I1209 11:31:30.360112  811348 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5920802.pem
	I1209 11:31:30.365072  811348 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 10:44 /usr/share/ca-certificates/5920802.pem
	I1209 11:31:30.365176  811348 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5920802.pem
	I1209 11:31:30.373018  811348 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5920802.pem /etc/ssl/certs/3ec20f2e.0"
	I1209 11:31:30.383167  811348 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1209 11:31:30.393293  811348 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 11:31:30.397307  811348 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 10:37 /usr/share/ca-certificates/minikubeCA.pem
	I1209 11:31:30.397432  811348 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 11:31:30.404502  811348 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1209 11:31:30.415291  811348 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/592080.pem && ln -fs /usr/share/ca-certificates/592080.pem /etc/ssl/certs/592080.pem"
	I1209 11:31:30.425269  811348 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/592080.pem
	I1209 11:31:30.429247  811348 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 10:44 /usr/share/ca-certificates/592080.pem
	I1209 11:31:30.429315  811348 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/592080.pem
	I1209 11:31:30.437056  811348 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/592080.pem /etc/ssl/certs/51391683.0"
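The ls/openssl/ln triples above implement the OpenSSL c_rehash convention: a CA is found by path lookup only if /etc/ssl/certs contains a symlink named <subject-hash>.0 pointing at the PEM, which is why minikubeCA.pem gains the b5213941.0 link. Recreating one link by hand, for any cert:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"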
	I1209 11:31:30.447450  811348 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 11:31:30.451138  811348 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1209 11:31:30.451204  811348 kubeadm.go:392] StartCluster: {Name:embed-certs-545509 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-545509 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 11:31:30.451296  811348 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1209 11:31:30.451362  811348 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 11:31:30.491466  811348 cri.go:89] found id: ""
	I1209 11:31:30.491541  811348 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 11:31:30.501210  811348 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 11:31:30.511029  811348 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1209 11:31:30.511128  811348 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 11:31:30.521307  811348 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 11:31:30.521329  811348 kubeadm.go:157] found existing configuration files:
	
	I1209 11:31:30.521412  811348 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1209 11:31:30.531287  811348 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 11:31:30.531423  811348 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 11:31:30.541088  811348 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1209 11:31:30.555788  811348 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 11:31:30.555879  811348 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 11:31:30.566424  811348 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1209 11:31:30.578106  811348 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 11:31:30.578169  811348 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 11:31:30.587370  811348 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1209 11:31:30.597057  811348 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 11:31:30.597200  811348 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
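
The sequence above is minikube's stale-config cleanup (kubeadm.go:155-163): each component kubeconfig under /etc/kubernetes is kept only if it already references https://control-plane.minikube.internal:8443 and is removed otherwise, so the kubeadm init below regenerates it; here every grep exits with status 2 simply because the files do not exist yet. A minimal sketch of that check, assuming direct file access in place of minikube's ssh_runner:

    // Sketch of the stale kubeconfig check logged above. The paths and
    // endpoint are taken from the log; local file access stands in for
    // minikube's ssh_runner.
    package main

    import (
        "bytes"
        "fmt"
        "os"
    )

    func main() {
        endpoint := []byte("https://control-plane.minikube.internal:8443")
        for _, conf := range []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        } {
            data, err := os.ReadFile(conf)
            if err == nil && bytes.Contains(data, endpoint) {
                continue // already points at the expected endpoint
            }
            // Missing or stale: remove it (a no-op when absent, as here)
            // so that `kubeadm init` can regenerate it.
            _ = os.Remove(conf)
            fmt.Println("cleaned:", conf)
        }
    }
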
	I1209 11:31:30.606883  811348 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1209 11:31:30.652403  811348 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1209 11:31:30.652753  811348 kubeadm.go:310] [preflight] Running pre-flight checks
	I1209 11:31:30.681234  811348 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I1209 11:31:30.681403  811348 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1072-aws
	I1209 11:31:30.681477  811348 kubeadm.go:310] OS: Linux
	I1209 11:31:30.681598  811348 kubeadm.go:310] CGROUPS_CPU: enabled
	I1209 11:31:30.681680  811348 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I1209 11:31:30.681743  811348 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I1209 11:31:30.681801  811348 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I1209 11:31:30.681857  811348 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I1209 11:31:30.681924  811348 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I1209 11:31:30.681976  811348 kubeadm.go:310] CGROUPS_PIDS: enabled
	I1209 11:31:30.682035  811348 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I1209 11:31:30.682088  811348 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I1209 11:31:30.745365  811348 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1209 11:31:30.745483  811348 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1209 11:31:30.745584  811348 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1209 11:31:30.752208  811348 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
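
Note that two minikube processes write to this capture: pid 800461 is the failing old-k8s-version-623695 start, while pid 811348 is a concurrent embed-certs-545509 start whose kubeadm init output is interleaved above. When reading such logs it can help to split them on the pid field of the klog header; a small filter, assuming the log arrives on stdin:

    // Keep only klog lines for one pid. The klog header format is
    // "Lmmdd hh:mm:ss.uuuuuu PID file:line] msg", so matching on the
    // space-padded pid field is enough for a quick split.
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    func main() {
        const pid = " 800461 " // the old-k8s-version start in this report
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 1024*1024), 1024*1024) // some log lines are very long
        for sc.Scan() {
            if strings.Contains(sc.Text(), pid) {
                fmt.Println(sc.Text())
            }
        }
    }
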
	I1209 11:31:30.876747  800461 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1209 11:31:30.888585  800461 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1209 11:31:30.891227  800461 out.go:201] 
	W1209 11:31:30.893313  800461 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W1209 11:31:30.893353  800461 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	W1209 11:31:30.893369  800461 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	W1209 11:31:30.893375  800461 out.go:270] * 
	W1209 11:31:30.894594  800461 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 11:31:30.897267  800461 out.go:201] 
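
The exit path is worth spelling out: /healthz returns 200, so the apiserver process itself is up, but the wait loop also requires the control plane to report the requested Kubernetes version, and it never settles on v1.20.0 within the 6m0s budget, hence K8S_UNHEALTHY_CONTROL_PLANE rather than an unreachable-server error. A sketch of the two signals the error message combines (minikube's real client is authenticated and its exact check differs; anonymous access to these endpoints is assumed here):

    // Probe liveness via /healthz (200 in the log) and the reported
    // server version, which this test required to become v1.20.0.
    // Endpoint from the log; TLS verification is skipped because the
    // apiserver certificate is self-signed.
    package main

    import (
        "crypto/tls"
        "encoding/json"
        "fmt"
        "net/http"
    )

    func main() {
        c := &http.Client{Transport: &http.Transport{
            TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
        }}

        hz, err := c.Get("https://192.168.85.2:8443/healthz")
        if err != nil {
            panic(err)
        }
        fmt.Println("healthz:", hz.StatusCode)

        vr, err := c.Get("https://192.168.85.2:8443/version")
        if err != nil {
            panic(err)
        }
        defer vr.Body.Close()
        var v struct {
            GitVersion string `json:"gitVersion"`
        }
        if err := json.NewDecoder(vr.Body).Decode(&v); err != nil {
            panic(err)
        }
        fmt.Println("server version:", v.GitVersion) // must match the requested version
    }
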
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	c302b0a613922       523cad1a4df73       2 minutes ago       Exited              dashboard-metrics-scraper   5                   3458921e081b3       dashboard-metrics-scraper-8d5bb5db8-96bls
	663485e631397       ba04bb24b9575       4 minutes ago       Running             storage-provisioner         3                   f043fdaf31a38       storage-provisioner
	b5b12d4047e89       20b332c9a70d8       5 minutes ago       Running             kubernetes-dashboard        0                   746b50167f421       kubernetes-dashboard-cd95d586-lgxbj
	af856b017afeb       db91994f4ee8f       5 minutes ago       Running             coredns                     1                   460be1a0f8119       coredns-74ff55c5b-pll5n
	119643b0c4cdc       1611cd07b61d5       5 minutes ago       Running             busybox                     1                   e5a7568aeb256       busybox
	91cb2ed43dfec       2be0bcf609c65       5 minutes ago       Running             kindnet-cni                 1                   230d93a98bc51       kindnet-82lzl
	167b84f8f987c       25a5233254979       5 minutes ago       Running             kube-proxy                  1                   2bd4720d903eb       kube-proxy-nftmg
	1c864e51ed369       ba04bb24b9575       5 minutes ago       Exited              storage-provisioner         2                   f043fdaf31a38       storage-provisioner
	ae41a05fd4b11       e7605f88f17d6       5 minutes ago       Running             kube-scheduler              1                   fd9b36cd3d8ba       kube-scheduler-old-k8s-version-623695
	c841cccf0a5bd       05b738aa1bc63       5 minutes ago       Running             etcd                        1                   7a22b2c66dc58       etcd-old-k8s-version-623695
	8afdcb5ba074e       1df8a2b116bd1       5 minutes ago       Running             kube-controller-manager     1                   9ae7792a3925c       kube-controller-manager-old-k8s-version-623695
	92b50938c8b97       2c08bbbc02d3a       5 minutes ago       Running             kube-apiserver              1                   5b29aa148a7e9       kube-apiserver-old-k8s-version-623695
	d9234929f4e6d       1611cd07b61d5       6 minutes ago       Exited              busybox                     0                   162f9d9b1ade1       busybox
	ed42b2a1e21f5       db91994f4ee8f       7 minutes ago       Exited              coredns                     0                   4761058ca1e45       coredns-74ff55c5b-pll5n
	eb174547e0773       2be0bcf609c65       8 minutes ago       Exited              kindnet-cni                 0                   97745075a569e       kindnet-82lzl
	a9ed84f8cfb61       25a5233254979       8 minutes ago       Exited              kube-proxy                  0                   89921cf3dfc37       kube-proxy-nftmg
	25fc7bce15ad2       1df8a2b116bd1       8 minutes ago       Exited              kube-controller-manager     0                   897deba6c7c2f       kube-controller-manager-old-k8s-version-623695
	0a9c5fc2481f8       e7605f88f17d6       8 minutes ago       Exited              kube-scheduler              0                   03db049c5ebd9       kube-scheduler-old-k8s-version-623695
	8f5f2eca7e918       2c08bbbc02d3a       8 minutes ago       Exited              kube-apiserver              0                   c85de1d1210e5       kube-apiserver-old-k8s-version-623695
	2b8b97c8ef833       05b738aa1bc63       8 minutes ago       Exited              etcd                        0                   b8a4462606bd8       etcd-old-k8s-version-623695
	
	
	==> containerd <==
	Dec 09 11:27:46 old-k8s-version-623695 containerd[569]: time="2024-12-09T11:27:46.590347304Z" level=info msg="CreateContainer within sandbox \"3458921e081b390dcd48735929af3f8fdab4debf680f4d0f6aa078cf68e9316d\" for name:\"dashboard-metrics-scraper\"  attempt:4 returns container id \"b6ea58f85fa6c9fe28b9d5ab62a4632f105050afa91bff4a8cdbc9f1e49b1541\""
	Dec 09 11:27:46 old-k8s-version-623695 containerd[569]: time="2024-12-09T11:27:46.592478591Z" level=info msg="StartContainer for \"b6ea58f85fa6c9fe28b9d5ab62a4632f105050afa91bff4a8cdbc9f1e49b1541\""
	Dec 09 11:27:46 old-k8s-version-623695 containerd[569]: time="2024-12-09T11:27:46.678096054Z" level=info msg="StartContainer for \"b6ea58f85fa6c9fe28b9d5ab62a4632f105050afa91bff4a8cdbc9f1e49b1541\" returns successfully"
	Dec 09 11:27:46 old-k8s-version-623695 containerd[569]: time="2024-12-09T11:27:46.717679570Z" level=info msg="shim disconnected" id=b6ea58f85fa6c9fe28b9d5ab62a4632f105050afa91bff4a8cdbc9f1e49b1541 namespace=k8s.io
	Dec 09 11:27:46 old-k8s-version-623695 containerd[569]: time="2024-12-09T11:27:46.717740585Z" level=warning msg="cleaning up after shim disconnected" id=b6ea58f85fa6c9fe28b9d5ab62a4632f105050afa91bff4a8cdbc9f1e49b1541 namespace=k8s.io
	Dec 09 11:27:46 old-k8s-version-623695 containerd[569]: time="2024-12-09T11:27:46.717750726Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Dec 09 11:27:47 old-k8s-version-623695 containerd[569]: time="2024-12-09T11:27:47.283266462Z" level=info msg="RemoveContainer for \"e30bb1351656155c92a907fc07957340c1070203fe09cbeb70c1c6d72613432f\""
	Dec 09 11:27:47 old-k8s-version-623695 containerd[569]: time="2024-12-09T11:27:47.291480493Z" level=info msg="RemoveContainer for \"e30bb1351656155c92a907fc07957340c1070203fe09cbeb70c1c6d72613432f\" returns successfully"
	Dec 09 11:28:38 old-k8s-version-623695 containerd[569]: time="2024-12-09T11:28:38.548822044Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 09 11:28:38 old-k8s-version-623695 containerd[569]: time="2024-12-09T11:28:38.557684534Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host" host=fake.domain
	Dec 09 11:28:38 old-k8s-version-623695 containerd[569]: time="2024-12-09T11:28:38.559740144Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	Dec 09 11:28:38 old-k8s-version-623695 containerd[569]: time="2024-12-09T11:28:38.559851629Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Dec 09 11:29:09 old-k8s-version-623695 containerd[569]: time="2024-12-09T11:29:09.548863527Z" level=info msg="CreateContainer within sandbox \"3458921e081b390dcd48735929af3f8fdab4debf680f4d0f6aa078cf68e9316d\" for container name:\"dashboard-metrics-scraper\"  attempt:5"
	Dec 09 11:29:09 old-k8s-version-623695 containerd[569]: time="2024-12-09T11:29:09.566297896Z" level=info msg="CreateContainer within sandbox \"3458921e081b390dcd48735929af3f8fdab4debf680f4d0f6aa078cf68e9316d\" for name:\"dashboard-metrics-scraper\"  attempt:5 returns container id \"c302b0a61392269d6463f3b12b8b9aad584620cc2a632866b4071902dbe225fc\""
	Dec 09 11:29:09 old-k8s-version-623695 containerd[569]: time="2024-12-09T11:29:09.567171314Z" level=info msg="StartContainer for \"c302b0a61392269d6463f3b12b8b9aad584620cc2a632866b4071902dbe225fc\""
	Dec 09 11:29:09 old-k8s-version-623695 containerd[569]: time="2024-12-09T11:29:09.659611122Z" level=info msg="StartContainer for \"c302b0a61392269d6463f3b12b8b9aad584620cc2a632866b4071902dbe225fc\" returns successfully"
	Dec 09 11:29:09 old-k8s-version-623695 containerd[569]: time="2024-12-09T11:29:09.686274729Z" level=info msg="shim disconnected" id=c302b0a61392269d6463f3b12b8b9aad584620cc2a632866b4071902dbe225fc namespace=k8s.io
	Dec 09 11:29:09 old-k8s-version-623695 containerd[569]: time="2024-12-09T11:29:09.686336990Z" level=warning msg="cleaning up after shim disconnected" id=c302b0a61392269d6463f3b12b8b9aad584620cc2a632866b4071902dbe225fc namespace=k8s.io
	Dec 09 11:29:09 old-k8s-version-623695 containerd[569]: time="2024-12-09T11:29:09.686347124Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Dec 09 11:29:10 old-k8s-version-623695 containerd[569]: time="2024-12-09T11:29:10.510562960Z" level=info msg="RemoveContainer for \"b6ea58f85fa6c9fe28b9d5ab62a4632f105050afa91bff4a8cdbc9f1e49b1541\""
	Dec 09 11:29:10 old-k8s-version-623695 containerd[569]: time="2024-12-09T11:29:10.516177974Z" level=info msg="RemoveContainer for \"b6ea58f85fa6c9fe28b9d5ab62a4632f105050afa91bff4a8cdbc9f1e49b1541\" returns successfully"
	Dec 09 11:31:18 old-k8s-version-623695 containerd[569]: time="2024-12-09T11:31:18.578641271Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 09 11:31:18 old-k8s-version-623695 containerd[569]: time="2024-12-09T11:31:18.585964147Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host" host=fake.domain
	Dec 09 11:31:18 old-k8s-version-623695 containerd[569]: time="2024-12-09T11:31:18.587709669Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	Dec 09 11:31:18 old-k8s-version-623695 containerd[569]: time="2024-12-09T11:31:18.587810126Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
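
The PullImage failures here are expected rather than the bug under test: this suite deliberately points the metrics-server image at fake.domain (see "Using image fake.domain/registry.k8s.io/echoserver:1.4" in the run output), and fake.domain does not resolve, so containerd reports "no such host" on every retry. The same resolver error is reproducible with a one-line lookup:

    // Reproduce the resolver error containerd logs for fake.domain.
    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        _, err := net.LookupHost("fake.domain")
        fmt.Println(err) // e.g. "lookup fake.domain: no such host"
    }
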
	
	
	==> coredns [af856b017afebc2eff0a52c30b40b9ae7d18fbed9646a2cd62795c30eae0a468] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = 093a0bf1423dd8c4eee62372bb216168
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:34230 - 19146 "HINFO IN 5414614461809052501.8755579344058806465. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015021828s
	
	
	==> coredns [ed42b2a1e21f5bc2262274dce47ab17b0351c70af13f20f07756dca1fc28f478] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = 093a0bf1423dd8c4eee62372bb216168
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:41342 - 38037 "HINFO IN 8371817319472522321.4202090601241178016. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011448122s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-623695
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-623695
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=60110addcdbf0fec7168b962521659e922988d6c
	                    minikube.k8s.io/name=old-k8s-version-623695
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_09T11_22_51_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 09 Dec 2024 11:22:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-623695
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 09 Dec 2024 11:31:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 09 Dec 2024 11:26:39 +0000   Mon, 09 Dec 2024 11:22:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 09 Dec 2024 11:26:39 +0000   Mon, 09 Dec 2024 11:22:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 09 Dec 2024 11:26:39 +0000   Mon, 09 Dec 2024 11:22:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 09 Dec 2024 11:26:39 +0000   Mon, 09 Dec 2024 11:23:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-623695
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 f19115236a0b4b3092ac588db40ca2b7
	  System UUID:                b96e3903-3a11-4691-9f38-ea41a76f2123
	  Boot ID:                    5eb73f75-e518-45c7-ab7b-f59a572ccc61
	  Kernel Version:             5.15.0-1072-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.22
	  Kubelet Version:            v1.20.0
	  Kube-Proxy Version:         v1.20.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m37s
	  kube-system                 coredns-74ff55c5b-pll5n                           100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     8m26s
	  kube-system                 etcd-old-k8s-version-623695                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         8m34s
	  kube-system                 kindnet-82lzl                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      8m26s
	  kube-system                 kube-apiserver-old-k8s-version-623695             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m34s
	  kube-system                 kube-controller-manager-old-k8s-version-623695    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m34s
	  kube-system                 kube-proxy-nftmg                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m26s
	  kube-system                 kube-scheduler-old-k8s-version-623695             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m34s
	  kube-system                 metrics-server-9975d5f86-9pw69                    100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         6m24s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m24s
	  kubernetes-dashboard        dashboard-metrics-scraper-8d5bb5db8-96bls         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m26s
	  kubernetes-dashboard        kubernetes-dashboard-cd95d586-lgxbj               0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             420Mi (5%)  220Mi (2%)
	  ephemeral-storage  100Mi (0%)  0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  8m54s (x5 over 8m54s)  kubelet     Node old-k8s-version-623695 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m54s (x5 over 8m54s)  kubelet     Node old-k8s-version-623695 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m54s (x4 over 8m54s)  kubelet     Node old-k8s-version-623695 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m34s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m34s                  kubelet     Node old-k8s-version-623695 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m34s                  kubelet     Node old-k8s-version-623695 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m34s                  kubelet     Node old-k8s-version-623695 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m34s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                8m26s                  kubelet     Node old-k8s-version-623695 status is now: NodeReady
	  Normal  Starting                 8m23s                  kube-proxy  Starting kube-proxy.
	  Normal  Starting                 5m55s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m55s (x8 over 5m55s)  kubelet     Node old-k8s-version-623695 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m55s (x8 over 5m55s)  kubelet     Node old-k8s-version-623695 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m55s (x7 over 5m55s)  kubelet     Node old-k8s-version-623695 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m55s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 5m41s                  kube-proxy  Starting kube-proxy.
	
	
	==> dmesg <==
	
	
	==> etcd [2b8b97c8ef833f12b7f54ccb1f4f9e43ee5e96f62f8bb9ee22ef50e58e3ea81f] <==
	raft2024/12/09 11:22:40 INFO: 9f0758e1c58a86ed became candidate at term 2
	raft2024/12/09 11:22:40 INFO: 9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2
	raft2024/12/09 11:22:40 INFO: 9f0758e1c58a86ed became leader at term 2
	raft2024/12/09 11:22:40 INFO: raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2
	2024-12-09 11:22:40.747196 I | etcdserver: published {Name:old-k8s-version-623695 ClientURLs:[https://192.168.85.2:2379]} to cluster 68eaea490fab4e05
	2024-12-09 11:22:40.747702 I | embed: ready to serve client requests
	2024-12-09 11:22:40.747873 I | embed: ready to serve client requests
	2024-12-09 11:22:40.747995 I | etcdserver: setting up the initial cluster version to 3.4
	2024-12-09 11:22:40.748823 N | etcdserver/membership: set the initial cluster version to 3.4
	2024-12-09 11:22:40.748933 I | etcdserver/api: enabled capabilities for version 3.4
	2024-12-09 11:22:40.765189 I | embed: serving client requests on 127.0.0.1:2379
	2024-12-09 11:22:40.765711 I | embed: serving client requests on 192.168.85.2:2379
	2024-12-09 11:23:09.412670 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-12-09 11:23:09.609870 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-12-09 11:23:19.616096 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-12-09 11:23:29.609747 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-12-09 11:23:39.609903 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-12-09 11:23:49.609789 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-12-09 11:23:59.610005 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-12-09 11:24:09.609834 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-12-09 11:24:19.609865 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-12-09 11:24:29.609818 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-12-09 11:24:39.609822 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-12-09 11:24:49.610132 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-12-09 11:24:59.610619 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> etcd [c841cccf0a5bdd1fe2fca556e5e36b880a686f41d9146ef7788f247bce61a9d5] <==
	2024-12-09 11:27:29.016310 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-12-09 11:27:39.016446 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-12-09 11:27:49.016550 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-12-09 11:27:59.016445 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-12-09 11:28:09.016378 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-12-09 11:28:19.016212 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-12-09 11:28:29.016464 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-12-09 11:28:39.016490 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-12-09 11:28:49.016397 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-12-09 11:28:59.016311 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-12-09 11:29:09.016600 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-12-09 11:29:19.016554 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-12-09 11:29:29.016299 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-12-09 11:29:39.016393 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-12-09 11:29:49.016423 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-12-09 11:29:59.016436 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-12-09 11:30:09.016615 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-12-09 11:30:19.016452 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-12-09 11:30:29.016559 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-12-09 11:30:39.016330 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-12-09 11:30:49.016233 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-12-09 11:30:59.016332 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-12-09 11:31:09.022211 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-12-09 11:31:19.022051 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-12-09 11:31:29.016753 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> kernel <==
	 11:31:33 up  4:14,  0 users,  load average: 1.99, 3.03, 3.14
	Linux old-k8s-version-623695 5.15.0-1072-aws #78~20.04.1-Ubuntu SMP Wed Oct 9 15:29:54 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [91cb2ed43dfec514bb35f6e6633f1b7bc69390d1eb0c98d7fd830339fc56e9e3] <==
	I1209 11:29:32.829955       1 main.go:301] handling current node
	I1209 11:29:42.830765       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1209 11:29:42.830802       1 main.go:301] handling current node
	I1209 11:29:52.823506       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1209 11:29:52.823539       1 main.go:301] handling current node
	I1209 11:30:02.829272       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1209 11:30:02.829309       1 main.go:301] handling current node
	I1209 11:30:12.830813       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1209 11:30:12.830848       1 main.go:301] handling current node
	I1209 11:30:22.829221       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1209 11:30:22.829257       1 main.go:301] handling current node
	I1209 11:30:32.830368       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1209 11:30:32.830407       1 main.go:301] handling current node
	I1209 11:30:42.830556       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1209 11:30:42.830595       1 main.go:301] handling current node
	I1209 11:30:52.823419       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1209 11:30:52.823460       1 main.go:301] handling current node
	I1209 11:31:02.829931       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1209 11:31:02.829970       1 main.go:301] handling current node
	I1209 11:31:12.829217       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1209 11:31:12.829334       1 main.go:301] handling current node
	I1209 11:31:22.831170       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1209 11:31:22.831207       1 main.go:301] handling current node
	I1209 11:31:32.831423       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1209 11:31:32.831461       1 main.go:301] handling current node
	
	
	==> kindnet [eb174547e077389a60898a71aa5ab17ac0877a9aa4886acc78d7d9454ff6ec44] <==
	I1209 11:23:11.805276       1 shared_informer.go:320] Caches are synced for kube-network-policies
	I1209 11:23:11.805305       1 metrics.go:61] Registering metrics
	I1209 11:23:11.805348       1 controller.go:401] Syncing nftables rules
	I1209 11:23:21.511652       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1209 11:23:21.511716       1 main.go:301] handling current node
	I1209 11:23:31.502757       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1209 11:23:31.502796       1 main.go:301] handling current node
	I1209 11:23:41.502275       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1209 11:23:41.502313       1 main.go:301] handling current node
	I1209 11:23:51.508727       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1209 11:23:51.508840       1 main.go:301] handling current node
	I1209 11:24:01.510833       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1209 11:24:01.510872       1 main.go:301] handling current node
	I1209 11:24:11.503190       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1209 11:24:11.503300       1 main.go:301] handling current node
	I1209 11:24:21.504411       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1209 11:24:21.504451       1 main.go:301] handling current node
	I1209 11:24:31.509752       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1209 11:24:31.509791       1 main.go:301] handling current node
	I1209 11:24:41.509243       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1209 11:24:41.509280       1 main.go:301] handling current node
	I1209 11:24:51.510464       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1209 11:24:51.510598       1 main.go:301] handling current node
	I1209 11:25:01.502783       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1209 11:25:01.502818       1 main.go:301] handling current node
	
	
	==> kube-apiserver [8f5f2eca7e918147351695f83e907d55498a88bcb6c566f93410605398939265] <==
	I1209 11:22:48.503011       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I1209 11:22:48.503168       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1209 11:22:48.541036       1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
	I1209 11:22:48.545830       1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
	I1209 11:22:48.545857       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I1209 11:22:49.084246       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1209 11:22:49.131108       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W1209 11:22:49.259811       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1209 11:22:49.262136       1 controller.go:606] quota admission added evaluator for: endpoints
	I1209 11:22:49.266872       1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1209 11:22:50.304616       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	I1209 11:22:51.019541       1 controller.go:606] quota admission added evaluator for: deployments.apps
	I1209 11:22:51.088983       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	I1209 11:22:59.513713       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
	I1209 11:23:07.637132       1 controller.go:606] quota admission added evaluator for: replicasets.apps
	I1209 11:23:07.798006       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
	I1209 11:23:15.796932       1 client.go:360] parsed scheme: "passthrough"
	I1209 11:23:15.796977       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1209 11:23:15.796986       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I1209 11:23:56.690760       1 client.go:360] parsed scheme: "passthrough"
	I1209 11:23:56.690807       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1209 11:23:56.690817       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I1209 11:24:31.929626       1 client.go:360] parsed scheme: "passthrough"
	I1209 11:24:31.929676       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1209 11:24:31.929685       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	
	==> kube-apiserver [92b50938c8b97d22b97b90d7211713e83c2e408b0a5bfa70a5b801a11893dfe2] <==
	I1209 11:28:15.849479       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1209 11:28:15.849487       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W1209 11:28:52.221165       1 handler_proxy.go:102] no RequestInfo found in the context
	E1209 11:28:52.221439       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1209 11:28:52.221455       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1209 11:28:57.463042       1 client.go:360] parsed scheme: "passthrough"
	I1209 11:28:57.463088       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1209 11:28:57.463097       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I1209 11:29:38.108585       1 client.go:360] parsed scheme: "passthrough"
	I1209 11:29:38.108639       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1209 11:29:38.108677       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I1209 11:30:15.706643       1 client.go:360] parsed scheme: "passthrough"
	I1209 11:30:15.706683       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1209 11:30:15.706692       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W1209 11:30:50.571896       1 handler_proxy.go:102] no RequestInfo found in the context
	E1209 11:30:50.571969       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1209 11:30:50.571979       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1209 11:30:58.165917       1 client.go:360] parsed scheme: "passthrough"
	I1209 11:30:58.166159       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1209 11:30:58.166237       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I1209 11:31:30.803352       1 client.go:360] parsed scheme: "passthrough"
	I1209 11:31:30.803409       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1209 11:31:30.803418       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	
	==> kube-controller-manager [25fc7bce15ad22f8de3baaeb0bb7f687bb2b3dc94e08ab8eb20977713310736b] <==
	I1209 11:23:07.610131       1 taint_manager.go:187] Starting NoExecuteTaintManager
	I1209 11:23:07.610877       1 event.go:291] "Event occurred" object="old-k8s-version-623695" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node old-k8s-version-623695 event: Registered Node old-k8s-version-623695 in Controller"
	I1209 11:23:07.660911       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-74ff55c5b to 2"
	I1209 11:23:07.682472       1 event.go:291] "Event occurred" object="kube-system/kube-apiserver-old-k8s-version-623695" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I1209 11:23:07.692007       1 shared_informer.go:247] Caches are synced for resource quota 
	E1209 11:23:07.702607       1 clusterroleaggregation_controller.go:181] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
	I1209 11:23:07.703597       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-vc5sr"
	E1209 11:23:07.710915       1 clusterroleaggregation_controller.go:181] view failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "view": the object has been modified; please apply your changes to the latest version and try again
	I1209 11:23:07.728655       1 shared_informer.go:247] Caches are synced for daemon sets 
	I1209 11:23:07.729514       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-pll5n"
	I1209 11:23:07.748346       1 shared_informer.go:247] Caches are synced for stateful set 
	I1209 11:23:07.758223       1 shared_informer.go:247] Caches are synced for resource quota 
	I1209 11:23:07.799767       1 shared_informer.go:247] Caches are synced for attach detach 
	I1209 11:23:07.842928       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-82lzl"
	I1209 11:23:07.843161       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-nftmg"
	I1209 11:23:07.943393       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	E1209 11:23:07.973322       1 daemon_controller.go:320] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kindnet", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"5e46648a-9d67-4c9b-8708-582b05ba991c", ResourceVersion:"277", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63869340171, loc:(*time.Location)(0x632eb80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"},\"name\":\"kindnet\",\"namespace\":\"kube-system\"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"kindnet\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"}},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"HOST_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.hostIP\"}}},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}},{\"name\":\"POD_SUBNET\",\"value\":\"10.244.0.0/16\"}],\"image\":\"docker.io/kindest/kindnetd:v20241108-5c6d2daf\",\"name\":\"kindnet-cni\",\"resources\":{\"limits\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"},\"requests\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"}},\"securityContext\":{\"capabilities\":{\"add\":[\"NET_RAW\",\"NET_ADMIN\"]},\"privileged\":false},\"volumeMounts\":[{\"mountPath\":\"/etc/cni/net.d\",\"name\":\"cni-cfg\"},{\"mountPath\":\"/run/xtables.lock\",\"name\":\"xtables-lock\",\"readOnly\":false},{\"mountPath\":\"/lib/modules\",\"name\":\"lib-modules\",\"readOnly\":true}]}],\"hostNetwork\":true,\"serviceAccountName\":\"kindnet\",\"tolerations\":[{\"effect\":\"NoSchedule\",\"operator\":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/cni/net.d\",\"type\":\"DirectoryOrCreate\"},\"name\":\"cni-cfg\"},{\"hostPath\":{\"path\":\"/run/xtables.lock\",\"type\":\"FileOrCreate\"},\"name\":\"xtables-lock\"},{\"hostPath\":{\"path\":\"/lib/modules\"},\"name\":\"lib-modules\"}]}}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubectl-client-side-apply", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x4000f65e00), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x4000f65e20)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x4000f65e40), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"cni-cfg", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4000f65e60), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4000f65e80), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4000f65ea0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kindnet-cni", Image:"docker.io/kindest/kindnetd:v20241108-5c6d2daf", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"HOST_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4000f65ec0)}, v1.EnvVar{Name:"POD_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4000f65f00)}, v1.EnvVar{Name:"POD_SUBNET", Value:"10.244.0.0/16", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-cfg", ReadOnly:false, MountPath:"/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x40005786c0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x4000d7fda8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"kindnet", DeprecatedServiceAccount:"kindnet", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x4000accfc0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x400000f820)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x4000d7fdf0)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kindnet": the object has been modified; please apply your changes to the latest version and try again
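
The "Operation cannot be fulfilled ... the object has been modified" errors in this section are ordinary optimistic-concurrency conflicts: two writers raced on the same resourceVersion, the losing update is rejected, and the controller re-reads and retries, so these entries are noisy but benign. The standard client-side pattern is client-go's RetryOnConflict; a sketch, assuming a reachable cluster and the default kubeconfig path (the label write is only a placeholder mutation):

    // Re-read and re-apply an update whenever the server reports a
    // resourceVersion conflict, mirroring how the controllers above
    // recover. The ClusterRole name "view" is the one from the log.
    package main

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
        "k8s.io/client-go/util/retry"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
            cr, err := cs.RbacV1().ClusterRoles().Get(context.TODO(), "view", metav1.GetOptions{})
            if err != nil {
                return err
            }
            if cr.Labels == nil {
                cr.Labels = map[string]string{}
            }
            cr.Labels["example.invalid/touched"] = "true" // hypothetical mutation
            _, err = cs.RbacV1().ClusterRoles().Update(context.TODO(), cr, metav1.UpdateOptions{})
            return err // a Conflict here triggers another Get+Update round
        })
        if err != nil {
            panic(err)
        }
    }
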
	I1209 11:23:08.243533       1 shared_informer.go:247] Caches are synced for garbage collector 
	I1209 11:23:08.247616       1 shared_informer.go:247] Caches are synced for garbage collector 
	I1209 11:23:08.247654       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1209 11:23:09.154781       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
	I1209 11:23:09.202166       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-vc5sr"
	I1209 11:23:12.610023       1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I1209 11:25:08.248349       1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-9975d5f86 to 1"
	E1209 11:25:08.473379       1 clusterroleaggregation_controller.go:181] view failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "view": the object has been modified; please apply your changes to the latest version and try again
	
	
	==> kube-controller-manager [8afdcb5ba074eac7202e301ff2b67765f72a8dd4e47a0100cf89910df022caaa] <==
	W1209 11:27:13.253225       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1209 11:27:39.292675       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1209 11:27:44.903670       1 request.go:655] Throttling request took 1.048444704s, request: GET:https://192.168.85.2:8443/apis/scheduling.k8s.io/v1beta1?timeout=32s
	W1209 11:27:45.755451       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1209 11:28:09.794794       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1209 11:28:17.405994       1 request.go:655] Throttling request took 1.048015397s, request: GET:https://192.168.85.2:8443/apis/apps/v1?timeout=32s
	W1209 11:28:18.257706       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1209 11:28:40.296656       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1209 11:28:49.908208       1 request.go:655] Throttling request took 1.048174157s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
	W1209 11:28:50.815702       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1209 11:29:10.798744       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1209 11:29:22.466288       1 request.go:655] Throttling request took 1.048437012s, request: GET:https://192.168.85.2:8443/apis/apiextensions.k8s.io/v1?timeout=32s
	W1209 11:29:23.317791       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1209 11:29:41.300607       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1209 11:29:54.968456       1 request.go:655] Throttling request took 1.048379183s, request: GET:https://192.168.85.2:8443/apis/apiextensions.k8s.io/v1beta1?timeout=32s
	W1209 11:29:55.820057       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1209 11:30:11.803012       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1209 11:30:27.470690       1 request.go:655] Throttling request took 1.048384419s, request: GET:https://192.168.85.2:8443/apis/apiextensions.k8s.io/v1?timeout=32s
	W1209 11:30:28.322316       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1209 11:30:42.306651       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1209 11:30:59.972790       1 request.go:655] Throttling request took 1.048360905s, request: GET:https://192.168.85.2:8443/apis/node.k8s.io/v1beta1?timeout=32s
	W1209 11:31:00.824339       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1209 11:31:12.810442       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1209 11:31:32.474803       1 request.go:655] Throttling request took 1.047353865s, request: GET:https://192.168.85.2:8443/apis/apiextensions.k8s.io/v1?timeout=32s
	W1209 11:31:33.326312       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
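
The repeating failures above, where both the garbage collector and the resource-quota controller cannot discover metrics.k8s.io/v1beta1, are a knock-on effect of the metrics-server pod never starting (see the kubelet section below): its APIService stays registered but unavailable, so aggregated discovery partially fails on every sync. A minimal sketch of how a client can surface exactly which groups fail, assuming a reachable kubeconfig (an illustration, not minikube code):

	package main

	import (
		"fmt"

		"k8s.io/client-go/discovery"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		dc, err := discovery.NewDiscoveryClientForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// ServerPreferredResources returns partial results plus an
		// ErrGroupDiscoveryFailed naming each unreachable group, e.g.
		// metrics.k8s.io/v1beta1 while metrics-server is down.
		_, err = dc.ServerPreferredResources()
		if gdf, ok := err.(*discovery.ErrGroupDiscoveryFailed); ok {
			for gv, gvErr := range gdf.Groups {
				fmt.Printf("group %s unavailable: %v\n", gv, gvErr)
			}
		}
	}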
	
	
	==> kube-proxy [167b84f8f987c313cf51f0bcb5bd6488b6ad7b450cd789460a00b513af86bb98] <==
	I1209 11:25:52.439850       1 node.go:172] Successfully retrieved node IP: 192.168.85.2
	I1209 11:25:52.439947       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.85.2), assume IPv4 operation
	W1209 11:25:52.478250       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I1209 11:25:52.478409       1 server_others.go:185] Using iptables Proxier.
	I1209 11:25:52.478927       1 server.go:650] Version: v1.20.0
	I1209 11:25:52.486176       1 config.go:224] Starting endpoint slice config controller
	I1209 11:25:52.486240       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I1209 11:25:52.486305       1 config.go:315] Starting service config controller
	I1209 11:25:52.486309       1 shared_informer.go:240] Waiting for caches to sync for service config
	I1209 11:25:52.586444       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I1209 11:25:52.586963       1 shared_informer.go:247] Caches are synced for service config 
	
	
	==> kube-proxy [a9ed84f8cfb61c9caff73d1ed13931a69b4186cabc325c3cc1809699aa1be74e] <==
	I1209 11:23:10.235776       1 node.go:172] Successfully retrieved node IP: 192.168.85.2
	I1209 11:23:10.235882       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.85.2), assume IPv4 operation
	W1209 11:23:10.266372       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I1209 11:23:10.266506       1 server_others.go:185] Using iptables Proxier.
	I1209 11:23:10.266745       1 server.go:650] Version: v1.20.0
	I1209 11:23:10.267194       1 config.go:315] Starting service config controller
	I1209 11:23:10.267206       1 shared_informer.go:240] Waiting for caches to sync for service config
	I1209 11:23:10.271620       1 config.go:224] Starting endpoint slice config controller
	I1209 11:23:10.271646       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I1209 11:23:10.367341       1 shared_informer.go:247] Caches are synced for service config 
	I1209 11:23:10.371877       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	
	==> kube-scheduler [0a9c5fc2481f8b0b984d6634dcbb9354b762189dacc2253907afe96468b89d47] <==
	I1209 11:22:43.500659       1 serving.go:331] Generated self-signed cert in-memory
	W1209 11:22:47.677424       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1209 11:22:47.677522       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1209 11:22:47.677553       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1209 11:22:47.677599       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1209 11:22:47.760482       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I1209 11:22:47.764739       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1209 11:22:47.764811       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1209 11:22:47.764847       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E1209 11:22:47.792689       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1209 11:22:47.800362       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1209 11:22:47.801777       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1209 11:22:47.802081       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1209 11:22:47.804474       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1209 11:22:47.804537       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1209 11:22:47.804881       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1209 11:22:47.805253       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1209 11:22:47.805674       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1209 11:22:47.805937       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1209 11:22:47.806943       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1209 11:22:47.825093       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1209 11:22:48.817461       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1209 11:22:49.079903       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1209 11:22:52.264894       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kube-scheduler [ae41a05fd4b1117afac3aa1f7764574fd0ed6427f9d72dc4841a9f2d948eeec3] <==
	I1209 11:25:44.156405       1 serving.go:331] Generated self-signed cert in-memory
	W1209 11:25:49.538567       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1209 11:25:49.538763       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1209 11:25:49.538835       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1209 11:25:49.538952       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1209 11:25:49.840908       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1209 11:25:49.840933       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1209 11:25:49.843350       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I1209 11:25:49.846025       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I1209 11:25:50.046345       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kubelet <==
	Dec 09 11:30:09 old-k8s-version-623695 kubelet[663]: E1209 11:30:09.546239     663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
	Dec 09 11:30:11 old-k8s-version-623695 kubelet[663]: E1209 11:30:11.546522     663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 09 11:30:20 old-k8s-version-623695 kubelet[663]: I1209 11:30:20.546126     663 scope.go:95] [topologymanager] RemoveContainer - Container ID: c302b0a61392269d6463f3b12b8b9aad584620cc2a632866b4071902dbe225fc
	Dec 09 11:30:20 old-k8s-version-623695 kubelet[663]: E1209 11:30:20.547174     663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
	Dec 09 11:30:26 old-k8s-version-623695 kubelet[663]: E1209 11:30:26.547093     663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 09 11:30:33 old-k8s-version-623695 kubelet[663]: I1209 11:30:33.545850     663 scope.go:95] [topologymanager] RemoveContainer - Container ID: c302b0a61392269d6463f3b12b8b9aad584620cc2a632866b4071902dbe225fc
	Dec 09 11:30:33 old-k8s-version-623695 kubelet[663]: E1209 11:30:33.546231     663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
	Dec 09 11:30:39 old-k8s-version-623695 kubelet[663]: E1209 11:30:39.546660     663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 09 11:30:44 old-k8s-version-623695 kubelet[663]: I1209 11:30:44.545981     663 scope.go:95] [topologymanager] RemoveContainer - Container ID: c302b0a61392269d6463f3b12b8b9aad584620cc2a632866b4071902dbe225fc
	Dec 09 11:30:44 old-k8s-version-623695 kubelet[663]: E1209 11:30:44.548327     663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
	Dec 09 11:30:50 old-k8s-version-623695 kubelet[663]: E1209 11:30:50.546557     663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 09 11:30:56 old-k8s-version-623695 kubelet[663]: I1209 11:30:56.545987     663 scope.go:95] [topologymanager] RemoveContainer - Container ID: c302b0a61392269d6463f3b12b8b9aad584620cc2a632866b4071902dbe225fc
	Dec 09 11:30:56 old-k8s-version-623695 kubelet[663]: E1209 11:30:56.546828     663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
	Dec 09 11:31:05 old-k8s-version-623695 kubelet[663]: E1209 11:31:05.546612     663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 09 11:31:07 old-k8s-version-623695 kubelet[663]: I1209 11:31:07.546697     663 scope.go:95] [topologymanager] RemoveContainer - Container ID: c302b0a61392269d6463f3b12b8b9aad584620cc2a632866b4071902dbe225fc
	Dec 09 11:31:07 old-k8s-version-623695 kubelet[663]: E1209 11:31:07.547033     663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
	Dec 09 11:31:18 old-k8s-version-623695 kubelet[663]: I1209 11:31:18.546437     663 scope.go:95] [topologymanager] RemoveContainer - Container ID: c302b0a61392269d6463f3b12b8b9aad584620cc2a632866b4071902dbe225fc
	Dec 09 11:31:18 old-k8s-version-623695 kubelet[663]: E1209 11:31:18.546764     663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
	Dec 09 11:31:18 old-k8s-version-623695 kubelet[663]: E1209 11:31:18.587963     663 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host
	Dec 09 11:31:18 old-k8s-version-623695 kubelet[663]: E1209 11:31:18.588010     663 kuberuntime_image.go:51] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host
	Dec 09 11:31:18 old-k8s-version-623695 kubelet[663]: E1209 11:31:18.588143     663 kuberuntime_manager.go:829] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:metrics-server-token-hcpl8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de): ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host
	Dec 09 11:31:18 old-k8s-version-623695 kubelet[663]: E1209 11:31:18.588176     663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	Dec 09 11:31:33 old-k8s-version-623695 kubelet[663]: I1209 11:31:33.546034     663 scope.go:95] [topologymanager] RemoveContainer - Container ID: c302b0a61392269d6463f3b12b8b9aad584620cc2a632866b4071902dbe225fc
	Dec 09 11:31:33 old-k8s-version-623695 kubelet[663]: E1209 11:31:33.546430     663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
	Dec 09 11:31:33 old-k8s-version-623695 kubelet[663]: E1209 11:31:33.554210     663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
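
Every ErrImagePull/ImagePullBackOff in the kubelet log above traces to one root cause: the metrics-server container image is set to fake.domain/registry.k8s.io/echoserver:1.4, a host that does not resolve, so containerd's HEAD request to the registry fails at DNS resolution. The failure can be reproduced outside the cluster with a plain resolver call (hostname taken from the log; nothing else assumed):

	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		// Mirrors the kubelet error above: "lookup fake.domain ... no such
		// host". Any image pull from this registry must fail at DNS.
		addrs, err := net.LookupHost("fake.domain")
		if err != nil {
			fmt.Println("lookup failed:", err)
			return
		}
		fmt.Println("resolved:", addrs)
	}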
	
	
	==> kubernetes-dashboard [b5b12d4047e8983ca175dce7ad530195e42f6c97736f3423b56ba4b67eee4cfd] <==
	2024/12/09 11:26:17 Using namespace: kubernetes-dashboard
	2024/12/09 11:26:17 Using in-cluster config to connect to apiserver
	2024/12/09 11:26:17 Using secret token for csrf signing
	2024/12/09 11:26:17 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/12/09 11:26:17 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/12/09 11:26:17 Successful initial request to the apiserver, version: v1.20.0
	2024/12/09 11:26:17 Generating JWE encryption key
	2024/12/09 11:26:17 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/12/09 11:26:17 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/12/09 11:26:17 Initializing JWE encryption key from synchronized object
	2024/12/09 11:26:17 Creating in-cluster Sidecar client
	2024/12/09 11:26:17 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/12/09 11:26:17 Serving insecurely on HTTP port: 9090
	2024/12/09 11:26:47 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/12/09 11:27:17 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/12/09 11:27:47 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/12/09 11:28:17 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/12/09 11:28:47 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/12/09 11:29:17 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/12/09 11:29:47 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/12/09 11:30:17 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/12/09 11:30:47 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/12/09 11:31:17 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/12/09 11:26:17 Starting overwatch
	
	
	==> storage-provisioner [1c864e51ed369180554c763558a548ef00ea435a92cc98a093c1261e29cf2bbf] <==
	I1209 11:25:51.788405       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1209 11:26:21.791462       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
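
The first storage-provisioner instance above died because the in-cluster apiserver service IP (10.96.0.1:443) was not yet reachable in the roughly 30-second window between its start (11:25:51) and the fatal exit (11:26:21) while the container was restarting; the second instance (below) came up after networking settled and acquired the leader lease. A minimal connectivity sketch with the address and timeout taken from the log (meaningful only from inside the cluster network):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 30*time.Second)
		if err != nil {
			// Matches the fatal "i/o timeout" above when the service
			// network is not yet routable.
			fmt.Println("apiserver unreachable:", err)
			return
		}
		defer conn.Close()
		fmt.Println("apiserver reachable")
	}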
	
	
	==> storage-provisioner [663485e6313972a7cfcbedd678323ecb2f884257c37073d5b2e988fad2ae5465] <==
	I1209 11:26:33.718755       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1209 11:26:33.757611       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1209 11:26:33.757680       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1209 11:26:51.271686       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1209 11:26:51.278884       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-623695_172be10f-068d-4c38-abc5-2e361e4bd04d!
	I1209 11:26:51.279362       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"817f54db-309c-430e-9890-d82edeb3c4de", APIVersion:"v1", ResourceVersion:"840", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-623695_172be10f-068d-4c38-abc5-2e361e4bd04d became leader
	I1209 11:26:51.385020       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-623695_172be10f-068d-4c38-abc5-2e361e4bd04d!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-623695 -n old-k8s-version-623695
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-623695 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-9975d5f86-9pw69
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-623695 describe pod metrics-server-9975d5f86-9pw69
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-623695 describe pod metrics-server-9975d5f86-9pw69: exit status 1 (145.690003ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-9975d5f86-9pw69" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-623695 describe pod metrics-server-9975d5f86-9pw69: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (373.16s)

                                                
                                    

Test pass (300/330)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 6.77
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.08
9 TestDownloadOnly/v1.20.0/DeleteAll 0.21
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.31.2/json-events 5.32
13 TestDownloadOnly/v1.31.2/preload-exists 0
17 TestDownloadOnly/v1.31.2/LogsDuration 0.08
18 TestDownloadOnly/v1.31.2/DeleteAll 0.24
19 TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds 0.14
21 TestBinaryMirror 0.55
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.08
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 216.58
29 TestAddons/serial/Volcano 40.96
31 TestAddons/serial/GCPAuth/Namespaces 0.19
32 TestAddons/serial/GCPAuth/FakeCredentials 8.92
35 TestAddons/parallel/Registry 15.35
36 TestAddons/parallel/Ingress 21.5
37 TestAddons/parallel/InspektorGadget 10.91
38 TestAddons/parallel/MetricsServer 5.84
40 TestAddons/parallel/CSI 52.47
41 TestAddons/parallel/Headlamp 17.56
42 TestAddons/parallel/CloudSpanner 6.71
43 TestAddons/parallel/LocalPath 9.94
44 TestAddons/parallel/NvidiaDevicePlugin 5.67
45 TestAddons/parallel/Yakd 11.86
47 TestAddons/StoppedEnableDisable 12.27
48 TestCertOptions 37.89
49 TestCertExpiration 233.61
51 TestForceSystemdFlag 50.05
52 TestForceSystemdEnv 42.62
53 TestDockerEnvContainerd 46.66
58 TestErrorSpam/setup 29.43
59 TestErrorSpam/start 0.73
60 TestErrorSpam/status 1.08
61 TestErrorSpam/pause 1.85
62 TestErrorSpam/unpause 1.9
63 TestErrorSpam/stop 1.49
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 51.54
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 6.84
70 TestFunctional/serial/KubeContext 0.06
71 TestFunctional/serial/KubectlGetPods 0.12
74 TestFunctional/serial/CacheCmd/cache/add_remote 4.24
75 TestFunctional/serial/CacheCmd/cache/add_local 1.31
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
77 TestFunctional/serial/CacheCmd/cache/list 0.06
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.31
79 TestFunctional/serial/CacheCmd/cache/cache_reload 2.1
80 TestFunctional/serial/CacheCmd/cache/delete 0.13
81 TestFunctional/serial/MinikubeKubectlCmd 0.14
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.16
83 TestFunctional/serial/ExtraConfig 47.96
84 TestFunctional/serial/ComponentHealth 0.12
85 TestFunctional/serial/LogsCmd 1.74
86 TestFunctional/serial/LogsFileCmd 1.73
87 TestFunctional/serial/InvalidService 4.93
89 TestFunctional/parallel/ConfigCmd 0.52
90 TestFunctional/parallel/DashboardCmd 14.62
91 TestFunctional/parallel/DryRun 0.49
92 TestFunctional/parallel/InternationalLanguage 0.21
93 TestFunctional/parallel/StatusCmd 1.09
97 TestFunctional/parallel/ServiceCmdConnect 9.94
98 TestFunctional/parallel/AddonsCmd 0.15
99 TestFunctional/parallel/PersistentVolumeClaim 26.21
101 TestFunctional/parallel/SSHCmd 0.67
102 TestFunctional/parallel/CpCmd 2.45
104 TestFunctional/parallel/FileSync 0.37
105 TestFunctional/parallel/CertSync 2.08
109 TestFunctional/parallel/NodeLabels 0.11
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.57
113 TestFunctional/parallel/License 0.24
115 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.69
116 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
118 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.48
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.11
120 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
124 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
125 TestFunctional/parallel/ServiceCmd/DeployApp 7.37
126 TestFunctional/parallel/ProfileCmd/profile_not_create 0.48
127 TestFunctional/parallel/ServiceCmd/List 0.69
128 TestFunctional/parallel/ProfileCmd/profile_list 0.53
129 TestFunctional/parallel/ProfileCmd/profile_json_output 0.47
130 TestFunctional/parallel/ServiceCmd/JSONOutput 0.64
131 TestFunctional/parallel/MountCmd/any-port 8.08
132 TestFunctional/parallel/ServiceCmd/HTTPS 0.71
133 TestFunctional/parallel/ServiceCmd/Format 0.75
134 TestFunctional/parallel/ServiceCmd/URL 0.64
135 TestFunctional/parallel/MountCmd/specific-port 2.13
136 TestFunctional/parallel/MountCmd/VerifyCleanup 2.47
137 TestFunctional/parallel/Version/short 0.1
138 TestFunctional/parallel/Version/components 1.37
139 TestFunctional/parallel/ImageCommands/ImageListShort 0.31
140 TestFunctional/parallel/ImageCommands/ImageListTable 0.27
141 TestFunctional/parallel/ImageCommands/ImageListJson 0.27
142 TestFunctional/parallel/ImageCommands/ImageListYaml 0.27
143 TestFunctional/parallel/ImageCommands/ImageBuild 4.08
144 TestFunctional/parallel/ImageCommands/Setup 0.63
145 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.2
146 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.11
147 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.42
148 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.4
149 TestFunctional/parallel/ImageCommands/ImageRemove 0.53
150 TestFunctional/parallel/UpdateContextCmd/no_changes 0.22
151 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.2
152 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.18
153 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.81
154 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.5
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
161 TestMultiControlPlane/serial/StartCluster 120.63
162 TestMultiControlPlane/serial/DeployApp 31.15
163 TestMultiControlPlane/serial/PingHostFromPods 1.86
164 TestMultiControlPlane/serial/AddWorkerNode 24.12
165 TestMultiControlPlane/serial/NodeLabels 0.12
166 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.02
167 TestMultiControlPlane/serial/CopyFile 19.99
168 TestMultiControlPlane/serial/StopSecondaryNode 12.9
169 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.87
170 TestMultiControlPlane/serial/RestartSecondaryNode 20.19
171 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.06
172 TestMultiControlPlane/serial/RestartClusterKeepsNodes 134.72
173 TestMultiControlPlane/serial/DeleteSecondaryNode 10.08
174 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.76
175 TestMultiControlPlane/serial/StopCluster 36.02
176 TestMultiControlPlane/serial/RestartCluster 81.53
177 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.8
178 TestMultiControlPlane/serial/AddSecondaryNode 42.25
179 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.06
183 TestJSONOutput/start/Command 66.8
184 TestJSONOutput/start/Audit 0
186 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
189 TestJSONOutput/pause/Command 0.74
190 TestJSONOutput/pause/Audit 0
192 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/unpause/Command 0.72
196 TestJSONOutput/unpause/Audit 0
198 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/stop/Command 6.08
202 TestJSONOutput/stop/Audit 0
204 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
206 TestErrorJSONOutput 0.25
208 TestKicCustomNetwork/create_custom_network 39.52
209 TestKicCustomNetwork/use_default_bridge_network 33.91
210 TestKicExistingNetwork 31.57
211 TestKicCustomSubnet 36.11
212 TestKicStaticIP 36.76
213 TestMainNoArgs 0.06
214 TestMinikubeProfile 72.11
217 TestMountStart/serial/StartWithMountFirst 9.12
218 TestMountStart/serial/VerifyMountFirst 0.26
219 TestMountStart/serial/StartWithMountSecond 6.21
220 TestMountStart/serial/VerifyMountSecond 0.28
221 TestMountStart/serial/DeleteFirst 1.62
222 TestMountStart/serial/VerifyMountPostDelete 0.27
223 TestMountStart/serial/Stop 1.21
224 TestMountStart/serial/RestartStopped 7.4
225 TestMountStart/serial/VerifyMountPostStop 0.26
228 TestMultiNode/serial/FreshStart2Nodes 68
229 TestMultiNode/serial/DeployApp2Nodes 16.25
230 TestMultiNode/serial/PingHostFrom2Pods 1.09
231 TestMultiNode/serial/AddNode 15.32
232 TestMultiNode/serial/MultiNodeLabels 0.1
233 TestMultiNode/serial/ProfileList 0.69
234 TestMultiNode/serial/CopyFile 10.25
235 TestMultiNode/serial/StopNode 2.28
236 TestMultiNode/serial/StartAfterStop 9.72
237 TestMultiNode/serial/RestartKeepsNodes 131.37
238 TestMultiNode/serial/DeleteNode 5.83
239 TestMultiNode/serial/StopMultiNode 23.94
240 TestMultiNode/serial/RestartMultiNode 52.22
241 TestMultiNode/serial/ValidateNameConflict 33.8
246 TestPreload 114.36
248 TestScheduledStopUnix 105.78
251 TestInsufficientStorage 10.43
252 TestRunningBinaryUpgrade 93.67
254 TestKubernetesUpgrade 358.03
255 TestMissingContainerUpgrade 177.25
257 TestPause/serial/Start 62.85
259 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
260 TestNoKubernetes/serial/StartWithK8s 40.56
261 TestNoKubernetes/serial/StartWithStopK8s 18
262 TestNoKubernetes/serial/Start 9.09
263 TestPause/serial/SecondStartNoReconfiguration 7.61
264 TestNoKubernetes/serial/VerifyK8sNotRunning 0.42
265 TestNoKubernetes/serial/ProfileList 1.17
266 TestNoKubernetes/serial/Stop 1.28
267 TestPause/serial/Pause 1.18
268 TestNoKubernetes/serial/StartNoArgs 7.27
269 TestPause/serial/VerifyStatus 0.48
270 TestPause/serial/Unpause 0.76
271 TestPause/serial/PauseAgain 0.88
272 TestPause/serial/DeletePaused 2.74
273 TestPause/serial/VerifyDeletedResources 0.48
274 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.42
275 TestStoppedBinaryUpgrade/Setup 0.59
276 TestStoppedBinaryUpgrade/Upgrade 106.22
277 TestStoppedBinaryUpgrade/MinikubeLogs 1.13
292 TestNetworkPlugins/group/false 4.88
297 TestStartStop/group/old-k8s-version/serial/FirstStart 172.72
299 TestStartStop/group/no-preload/serial/FirstStart 76.41
300 TestStartStop/group/old-k8s-version/serial/DeployApp 10.74
301 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.51
302 TestStartStop/group/old-k8s-version/serial/Stop 13.01
303 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.25
305 TestStartStop/group/no-preload/serial/DeployApp 8.49
306 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.42
307 TestStartStop/group/no-preload/serial/Stop 12.12
308 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.21
309 TestStartStop/group/no-preload/serial/SecondStart 267.12
310 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
311 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.11
312 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.26
313 TestStartStop/group/no-preload/serial/Pause 3.37
315 TestStartStop/group/embed-certs/serial/FirstStart 67.83
316 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
317 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 6.12
318 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.35
319 TestStartStop/group/old-k8s-version/serial/Pause 3.73
321 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 62.46
322 TestStartStop/group/embed-certs/serial/DeployApp 10.4
323 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.22
324 TestStartStop/group/embed-certs/serial/Stop 12.02
325 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.21
326 TestStartStop/group/embed-certs/serial/SecondStart 280.37
327 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.52
328 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.53
329 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.29
330 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.23
331 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 267.9
332 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
333 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.14
334 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.35
335 TestStartStop/group/embed-certs/serial/Pause 3.43
337 TestStartStop/group/newest-cni/serial/FirstStart 39.13
338 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
339 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.12
340 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.27
341 TestStartStop/group/default-k8s-diff-port/serial/Pause 4.18
342 TestNetworkPlugins/group/auto/Start 60.54
343 TestStartStop/group/newest-cni/serial/DeployApp 0
344 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 2.25
345 TestStartStop/group/newest-cni/serial/Stop 1.35
346 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.26
347 TestStartStop/group/newest-cni/serial/SecondStart 22.58
348 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
349 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
350 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.42
351 TestStartStop/group/newest-cni/serial/Pause 4.06
352 TestNetworkPlugins/group/kindnet/Start 71.99
353 TestNetworkPlugins/group/auto/KubeletFlags 0.38
354 TestNetworkPlugins/group/auto/NetCatPod 10.44
355 TestNetworkPlugins/group/auto/DNS 0.25
356 TestNetworkPlugins/group/auto/Localhost 0.19
357 TestNetworkPlugins/group/auto/HairPin 0.2
358 TestNetworkPlugins/group/calico/Start 68.69
359 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
360 TestNetworkPlugins/group/kindnet/KubeletFlags 0.33
361 TestNetworkPlugins/group/kindnet/NetCatPod 11.32
362 TestNetworkPlugins/group/kindnet/DNS 0.21
363 TestNetworkPlugins/group/kindnet/Localhost 0.17
364 TestNetworkPlugins/group/kindnet/HairPin 0.23
365 TestNetworkPlugins/group/custom-flannel/Start 57.52
366 TestNetworkPlugins/group/calico/ControllerPod 6.07
367 TestNetworkPlugins/group/calico/KubeletFlags 0.58
368 TestNetworkPlugins/group/calico/NetCatPod 11.37
369 TestNetworkPlugins/group/calico/DNS 0.25
370 TestNetworkPlugins/group/calico/Localhost 0.22
371 TestNetworkPlugins/group/calico/HairPin 0.24
372 TestNetworkPlugins/group/enable-default-cni/Start 72.08
373 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.43
374 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.35
375 TestNetworkPlugins/group/custom-flannel/DNS 0.24
376 TestNetworkPlugins/group/custom-flannel/Localhost 0.19
377 TestNetworkPlugins/group/custom-flannel/HairPin 0.18
378 TestNetworkPlugins/group/flannel/Start 53.88
379 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.38
380 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.35
381 TestNetworkPlugins/group/enable-default-cni/DNS 0.21
382 TestNetworkPlugins/group/enable-default-cni/Localhost 0.19
383 TestNetworkPlugins/group/enable-default-cni/HairPin 0.19
384 TestNetworkPlugins/group/flannel/ControllerPod 6.01
385 TestNetworkPlugins/group/bridge/Start 50.58
386 TestNetworkPlugins/group/flannel/KubeletFlags 0.45
387 TestNetworkPlugins/group/flannel/NetCatPod 11.4
388 TestNetworkPlugins/group/flannel/DNS 0.26
389 TestNetworkPlugins/group/flannel/Localhost 0.17
390 TestNetworkPlugins/group/flannel/HairPin 0.24
391 TestNetworkPlugins/group/bridge/KubeletFlags 0.33
392 TestNetworkPlugins/group/bridge/NetCatPod 10.26
393 TestNetworkPlugins/group/bridge/DNS 0.18
394 TestNetworkPlugins/group/bridge/Localhost 0.15
395 TestNetworkPlugins/group/bridge/HairPin 0.15
TestDownloadOnly/v1.20.0/json-events (6.77s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-933846 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-933846 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (6.772799607s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (6.77s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I1209 10:36:43.257372  592080 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
I1209 10:36:43.257454  592080 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20068-586689/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)
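
The preload-exists check above passes because the tarball fetched by the json-events test is already on disk; the check amounts to a file-existence probe. A minimal sketch with the path copied from the log (the structure is illustrative, not minikube's actual preload.go):

	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		const preload = "/home/jenkins/minikube-integration/20068-586689/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4"
		if _, err := os.Stat(preload); err == nil {
			fmt.Println("Found local preload:", preload)
		} else {
			fmt.Println("preload missing:", err)
		}
	}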

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-933846
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-933846: exit status 85 (76.765064ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-933846 | jenkins | v1.34.0 | 09 Dec 24 10:36 UTC |          |
	|         | -p download-only-933846        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/09 10:36:36
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.23.2 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 10:36:36.537424  592086 out.go:345] Setting OutFile to fd 1 ...
	I1209 10:36:36.537569  592086 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 10:36:36.537580  592086 out.go:358] Setting ErrFile to fd 2...
	I1209 10:36:36.537585  592086 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 10:36:36.537852  592086 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20068-586689/.minikube/bin
	W1209 10:36:36.538000  592086 root.go:314] Error reading config file at /home/jenkins/minikube-integration/20068-586689/.minikube/config/config.json: open /home/jenkins/minikube-integration/20068-586689/.minikube/config/config.json: no such file or directory
	I1209 10:36:36.538397  592086 out.go:352] Setting JSON to true
	I1209 10:36:36.539267  592086 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":11944,"bootTime":1733728653,"procs":151,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1072-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1209 10:36:36.539340  592086 start.go:139] virtualization:  
	I1209 10:36:36.543878  592086 out.go:97] [download-only-933846] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	W1209 10:36:36.544247  592086 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/20068-586689/.minikube/cache/preloaded-tarball: no such file or directory
	I1209 10:36:36.544297  592086 notify.go:220] Checking for updates...
	I1209 10:36:36.546426  592086 out.go:169] MINIKUBE_LOCATION=20068
	I1209 10:36:36.548529  592086 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 10:36:36.550608  592086 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20068-586689/kubeconfig
	I1209 10:36:36.552500  592086 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20068-586689/.minikube
	I1209 10:36:36.554213  592086 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W1209 10:36:36.557621  592086 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1209 10:36:36.557948  592086 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 10:36:36.586947  592086 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1209 10:36:36.587052  592086 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 10:36:36.646384  592086 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-12-09 10:36:36.636576686 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1072-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1209 10:36:36.646502  592086 docker.go:318] overlay module found
	I1209 10:36:36.648421  592086 out.go:97] Using the docker driver based on user configuration
	I1209 10:36:36.648490  592086 start.go:297] selected driver: docker
	I1209 10:36:36.648506  592086 start.go:901] validating driver "docker" against <nil>
	I1209 10:36:36.648626  592086 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 10:36:36.705946  592086 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-12-09 10:36:36.697010462 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1072-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1209 10:36:36.706149  592086 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1209 10:36:36.706482  592086 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I1209 10:36:36.706642  592086 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1209 10:36:36.708825  592086 out.go:169] Using Docker driver with root privileges
	I1209 10:36:36.710502  592086 cni.go:84] Creating CNI manager for ""
	I1209 10:36:36.710587  592086 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1209 10:36:36.710607  592086 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1209 10:36:36.710717  592086 start.go:340] cluster config:
	{Name:download-only-933846 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-933846 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 10:36:36.712742  592086 out.go:97] Starting "download-only-933846" primary control-plane node in "download-only-933846" cluster
	I1209 10:36:36.712797  592086 cache.go:121] Beginning downloading kic base image for docker with containerd
	I1209 10:36:36.715011  592086 out.go:97] Pulling base image v0.0.45-1730888964-19917 ...
	I1209 10:36:36.715051  592086 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I1209 10:36:36.715256  592086 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 in local docker daemon
	I1209 10:36:36.730759  592086 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 to local cache
	I1209 10:36:36.730940  592086 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 in local cache directory
	I1209 10:36:36.731041  592086 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 to local cache
	I1209 10:36:36.780001  592086 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I1209 10:36:36.780032  592086 cache.go:56] Caching tarball of preloaded images
	I1209 10:36:36.780211  592086 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I1209 10:36:36.782544  592086 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I1209 10:36:36.782573  592086 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I1209 10:36:36.881299  592086 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:7e3d48ccb9f143791669d02e14ce1643 -> /home/jenkins/minikube-integration/20068-586689/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I1209 10:36:41.528966  592086 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I1209 10:36:41.529165  592086 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/20068-586689/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	
	
	* The control-plane node download-only-933846 host does not exist
	  To start a cluster, run: "minikube start -p download-only-933846"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
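
For reference, the download.go:107 line in the log above records the preload fetch passing an md5 digest in the URL's checksum parameter. A minimal Go sketch of that download-then-verify pattern (this is not minikube's implementation; URL, destination path, and digest below are placeholders standing in for the values shown in the log):

// checksum_download.go: stream a file to disk while hashing it, then
// compare the computed md5 against the expected hex digest.
package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
)

func downloadWithMD5(url, dest, wantMD5 string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("unexpected status: %s", resp.Status)
	}

	out, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer out.Close()

	// Hash the bytes as they are written so the file is only read once.
	h := md5.New()
	if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
		return err
	}

	got := hex.EncodeToString(h.Sum(nil))
	if got != wantMD5 {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantMD5)
	}
	return nil
}

func main() {
	// Hypothetical values; the real preload URL and digest appear in the log above.
	err := downloadWithMD5(
		"https://example.com/preload.tar.lz4",
		"/tmp/preload.tar.lz4",
		"7e3d48ccb9f143791669d02e14ce1643",
	)
	fmt.Println(err)
}
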
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.08s)

TestDownloadOnly/v1.20.0/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.21s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-933846
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.31.2/json-events (5.32s)

=== RUN   TestDownloadOnly/v1.31.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-099730 --force --alsologtostderr --kubernetes-version=v1.31.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-099730 --force --alsologtostderr --kubernetes-version=v1.31.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (5.32331864s)
--- PASS: TestDownloadOnly/v1.31.2/json-events (5.32s)

TestDownloadOnly/v1.31.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.2/preload-exists
I1209 10:36:49.006794  592080 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime containerd
I1209 10:36:49.006850  592080 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20068-586689/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.2/preload-exists (0.00s)

TestDownloadOnly/v1.31.2/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.31.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-099730
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-099730: exit status 85 (79.140712ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-933846 | jenkins | v1.34.0 | 09 Dec 24 10:36 UTC |                     |
	|         | -p download-only-933846        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 09 Dec 24 10:36 UTC | 09 Dec 24 10:36 UTC |
	| delete  | -p download-only-933846        | download-only-933846 | jenkins | v1.34.0 | 09 Dec 24 10:36 UTC | 09 Dec 24 10:36 UTC |
	| start   | -o=json --download-only        | download-only-099730 | jenkins | v1.34.0 | 09 Dec 24 10:36 UTC |                     |
	|         | -p download-only-099730        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/09 10:36:43
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.23.2 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 10:36:43.735281  592287 out.go:345] Setting OutFile to fd 1 ...
	I1209 10:36:43.735486  592287 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 10:36:43.735513  592287 out.go:358] Setting ErrFile to fd 2...
	I1209 10:36:43.735535  592287 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 10:36:43.735825  592287 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20068-586689/.minikube/bin
	I1209 10:36:43.736285  592287 out.go:352] Setting JSON to true
	I1209 10:36:43.737170  592287 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":11951,"bootTime":1733728653,"procs":147,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1072-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1209 10:36:43.737271  592287 start.go:139] virtualization:  
	I1209 10:36:43.739851  592287 out.go:97] [download-only-099730] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1209 10:36:43.740103  592287 notify.go:220] Checking for updates...
	I1209 10:36:43.741791  592287 out.go:169] MINIKUBE_LOCATION=20068
	I1209 10:36:43.743750  592287 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 10:36:43.745513  592287 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20068-586689/kubeconfig
	I1209 10:36:43.747614  592287 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20068-586689/.minikube
	I1209 10:36:43.749610  592287 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W1209 10:36:43.753832  592287 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1209 10:36:43.754118  592287 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 10:36:43.785583  592287 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1209 10:36:43.785689  592287 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 10:36:43.837884  592287 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-12-09 10:36:43.828790622 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1072-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1209 10:36:43.837997  592287 docker.go:318] overlay module found
	I1209 10:36:43.840368  592287 out.go:97] Using the docker driver based on user configuration
	I1209 10:36:43.840399  592287 start.go:297] selected driver: docker
	I1209 10:36:43.840407  592287 start.go:901] validating driver "docker" against <nil>
	I1209 10:36:43.840522  592287 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 10:36:43.890697  592287 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-12-09 10:36:43.882191481 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1072-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1209 10:36:43.890917  592287 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1209 10:36:43.891216  592287 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I1209 10:36:43.891376  592287 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1209 10:36:43.893463  592287 out.go:169] Using Docker driver with root privileges
	I1209 10:36:43.895287  592287 cni.go:84] Creating CNI manager for ""
	I1209 10:36:43.895357  592287 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1209 10:36:43.895370  592287 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1209 10:36:43.895454  592287 start.go:340] cluster config:
	{Name:download-only-099730 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:download-only-099730 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 10:36:43.897484  592287 out.go:97] Starting "download-only-099730" primary control-plane node in "download-only-099730" cluster
	I1209 10:36:43.897511  592287 cache.go:121] Beginning downloading kic base image for docker with containerd
	I1209 10:36:43.899299  592287 out.go:97] Pulling base image v0.0.45-1730888964-19917 ...
	I1209 10:36:43.899325  592287 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime containerd
	I1209 10:36:43.899426  592287 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 in local docker daemon
	I1209 10:36:43.914614  592287 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 to local cache
	I1209 10:36:43.914745  592287 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 in local cache directory
	I1209 10:36:43.914769  592287 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 in local cache directory, skipping pull
	I1209 10:36:43.914778  592287 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 exists in cache, skipping pull
	I1209 10:36:43.914787  592287 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 as a tarball
	I1209 10:36:43.960136  592287 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.2/preloaded-images-k8s-v18-v1.31.2-containerd-overlay2-arm64.tar.lz4
	I1209 10:36:43.960177  592287 cache.go:56] Caching tarball of preloaded images
	I1209 10:36:43.960336  592287 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime containerd
	I1209 10:36:43.962575  592287 out.go:97] Downloading Kubernetes v1.31.2 preload ...
	I1209 10:36:43.962604  592287 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.2-containerd-overlay2-arm64.tar.lz4 ...
	I1209 10:36:44.056468  592287 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.2/preloaded-images-k8s-v18-v1.31.2-containerd-overlay2-arm64.tar.lz4?checksum=md5:5a1c96cd03f848c5b0e8fb66f315acd5 -> /home/jenkins/minikube-integration/20068-586689/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-containerd-overlay2-arm64.tar.lz4
	I1209 10:36:47.553510  592287 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.2-containerd-overlay2-arm64.tar.lz4 ...
	I1209 10:36:47.553628  592287 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/20068-586689/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-containerd-overlay2-arm64.tar.lz4 ...
	I1209 10:36:48.422300  592287 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on containerd
	I1209 10:36:48.422697  592287 profile.go:143] Saving config to /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/download-only-099730/config.json ...
	I1209 10:36:48.422734  592287 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/download-only-099730/config.json: {Name:mk60ffdf57d9feafde8660b5292196dfa3ef965a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 10:36:48.422930  592287 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime containerd
	I1209 10:36:48.423092  592287 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/20068-586689/.minikube/cache/linux/arm64/v1.31.2/kubectl
	
	
	* The control-plane node download-only-099730 host does not exist
	  To start a cluster, run: "minikube start -p download-only-099730"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.2/LogsDuration (0.08s)

TestDownloadOnly/v1.31.2/DeleteAll (0.24s)

=== RUN   TestDownloadOnly/v1.31.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.2/DeleteAll (0.24s)

TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-099730
--- PASS: TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds (0.14s)

TestBinaryMirror (0.55s)

=== RUN   TestBinaryMirror
I1209 10:36:50.314256  592080 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-878655 --alsologtostderr --binary-mirror http://127.0.0.1:43313 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-878655" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-878655
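
TestBinaryMirror points --binary-mirror at a local HTTP endpoint (http://127.0.0.1:43313 in the run above), so any server whose directory tree mirrors the dl.k8s.io release layout (e.g. release/v1.31.2/bin/linux/arm64/kubectl) can stand in for the upstream host. A minimal sketch of such a mirror; the directory path is a hypothetical example, not something the test creates:

// binary_mirror.go: serve a directory laid out like dl.k8s.io.
package main

import (
	"log"
	"net/http"
)

func main() {
	fs := http.FileServer(http.Dir("/srv/k8s-mirror")) // hypothetical mirror root
	log.Fatal(http.ListenAndServe("127.0.0.1:43313", fs))
}
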
--- PASS: TestBinaryMirror (0.55s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-764596
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-764596: exit status 85 (76.623344ms)

-- stdout --
	* Profile "addons-764596" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-764596"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-764596
addons_test.go:950: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-764596: exit status 85 (72.596152ms)

-- stdout --
	* Profile "addons-764596" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-764596"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

TestAddons/Setup (216.58s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-arm64 start -p addons-764596 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-arm64 start -p addons-764596 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m36.58129649s)
--- PASS: TestAddons/Setup (216.58s)

TestAddons/serial/Volcano (40.96s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:807: volcano-scheduler stabilized in 52.078929ms
addons_test.go:815: volcano-admission stabilized in 52.219295ms
addons_test.go:823: volcano-controller stabilized in 52.252149ms
addons_test.go:829: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-6c9778cbdf-6dpc4" [19274a14-12b4-4daa-88d8-6c4a7e52ffef] Running
addons_test.go:829: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.004046653s
addons_test.go:833: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-5874dfdd79-bf624" [9f768517-6d5a-4529-9ff4-6a0e75db5318] Running
addons_test.go:833: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003994153s
addons_test.go:837: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-789ffc5785-wt8ld" [20bede65-41b1-4dfe-8eaa-b822609a09ed] Running
addons_test.go:837: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003913301s
addons_test.go:842: (dbg) Run:  kubectl --context addons-764596 delete -n volcano-system job volcano-admission-init
addons_test.go:848: (dbg) Run:  kubectl --context addons-764596 create -f testdata/vcjob.yaml
addons_test.go:856: (dbg) Run:  kubectl --context addons-764596 get vcjob -n my-volcano
addons_test.go:874: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [195912af-c4c0-44a1-84e2-f3c4e4bd5b9c] Pending
helpers_test.go:344: "test-job-nginx-0" [195912af-c4c0-44a1-84e2-f3c4e4bd5b9c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [195912af-c4c0-44a1-84e2-f3c4e4bd5b9c] Running
addons_test.go:874: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 13.004808829s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-764596 addons disable volcano --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-764596 addons disable volcano --alsologtostderr -v=1: (11.335523744s)
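
The helpers_test.go:344 lines above are a poll over pods matching a label selector (e.g. app=volcano-scheduler in volcano-system) until each reports Ready. A rough client-go equivalent of that wait loop, assumed for illustration rather than taken from the suite's actual helper:

// wait_for_label.go: poll a namespace until every pod matching a
// label selector has the PodReady condition set to True.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podReady(p corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
			return true
		}
	}
	return false
}

func waitForLabel(ctx context.Context, cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil || len(pods.Items) == 0 {
				return false, nil // transient errors and empty lists: keep polling
			}
			for _, p := range pods.Items {
				if !podReady(p) {
					return false, nil
				}
			}
			return true, nil
		})
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(config)
	// Label, namespace, and the 6m timeout mirror the log above.
	err = waitForLabel(context.Background(), cs, "volcano-system", "app=volcano-scheduler", 6*time.Minute)
	fmt.Println(err)
}
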
--- PASS: TestAddons/serial/Volcano (40.96s)

TestAddons/serial/GCPAuth/Namespaces (0.19s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-764596 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-764596 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.19s)

TestAddons/serial/GCPAuth/FakeCredentials (8.92s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-764596 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-764596 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [088b340f-a65b-44b2-900b-d213c9551ecb] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [088b340f-a65b-44b2-900b-d213c9551ecb] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 8.004120381s
addons_test.go:633: (dbg) Run:  kubectl --context addons-764596 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-764596 describe sa gcp-auth-test
addons_test.go:659: (dbg) Run:  kubectl --context addons-764596 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:683: (dbg) Run:  kubectl --context addons-764596 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (8.92s)

TestAddons/parallel/Registry (15.35s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 3.693718ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-5cc95cd69-q5bzg" [c6448164-f7c6-4651-ac61-78b4b1dbb73d] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003513351s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-x5fhc" [3ef2452a-5d7c-4cc1-8634-88622dc2b951] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004061355s
addons_test.go:331: (dbg) Run:  kubectl --context addons-764596 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-764596 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-764596 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.317372256s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-arm64 -p addons-764596 ip
2024/12/09 10:41:41 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-764596 addons disable registry --alsologtostderr -v=1
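
The reachability check above runs wget --spider against the registry's cluster DNS name from inside a busybox pod, i.e. it fetches headers without downloading a body. An equivalent probe sketched in Go (assumed, not the test's code; the name only resolves in-cluster, where kube-dns serves *.svc.cluster.local):

// registry_probe.go: HEAD request against the in-cluster registry service.
package main

import (
	"fmt"
	"net/http"
)

func main() {
	resp, err := http.Head("http://registry.kube-system.svc.cluster.local")
	if err != nil {
		panic(err)
	}
	resp.Body.Close()
	fmt.Println(resp.Status) // any 2xx/3xx means the service answered
}
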
--- PASS: TestAddons/parallel/Registry (15.35s)

TestAddons/parallel/Ingress (21.5s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-764596 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-764596 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-764596 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [23327c87-f37f-48a7-b782-b62c84feee69] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [23327c87-f37f-48a7-b782-b62c84feee69] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.005619334s
I1209 10:42:34.973791  592080 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-arm64 -p addons-764596 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context addons-764596 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-arm64 -p addons-764596 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-764596 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-764596 addons disable ingress-dns --alsologtostderr -v=1: (2.220587402s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-764596 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-764596 addons disable ingress --alsologtostderr -v=1: (8.126182378s)
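
The curl step above (-H 'Host: nginx.example.com') relies on the HTTP Host header, not DNS, to select the ingress rule: the request goes to a plain IP and the controller routes on the header. A Go sketch of the same probe; the node IP is copied from the log, and this stands in for the curl-over-ssh the test actually performs:

// ingress_probe.go: send a request to the node IP with an overridden
// Host header so the ingress controller picks the nginx backend.
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	req, err := http.NewRequest("GET", "http://192.168.49.2/", nil)
	if err != nil {
		panic(err)
	}
	req.Host = "nginx.example.com" // routing key for the ingress rule
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, len(body), "bytes")
}
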
--- PASS: TestAddons/parallel/Ingress (21.50s)

TestAddons/parallel/InspektorGadget (10.91s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-w96gp" [ae08a0c1-e6c1-4aeb-94f8-36cf6faa3c58] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.005353999s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-764596 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-764596 addons disable inspektor-gadget --alsologtostderr -v=1: (5.904977409s)
--- PASS: TestAddons/parallel/InspektorGadget (10.91s)

TestAddons/parallel/MetricsServer (5.84s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 3.114675ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-kprvw" [586e99d2-994e-46e0-a32e-7c9f0f8ecb41] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004266683s
addons_test.go:402: (dbg) Run:  kubectl --context addons-764596 top pods -n kube-system
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-764596 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.84s)

TestAddons/parallel/CSI (52.47s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1209 10:41:51.993660  592080 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1209 10:41:52.000003  592080 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1209 10:41:52.000038  592080 kapi.go:107] duration metric: took 9.763764ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 9.776671ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-764596 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-764596 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-764596 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-764596 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-764596 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-764596 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-764596 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-764596 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [2236f9d1-7773-4ce1-9ac6-f6fb21521486] Pending
helpers_test.go:344: "task-pv-pod" [2236f9d1-7773-4ce1-9ac6-f6fb21521486] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [2236f9d1-7773-4ce1-9ac6-f6fb21521486] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.004651779s
addons_test.go:511: (dbg) Run:  kubectl --context addons-764596 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-764596 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-764596 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-764596 delete pod task-pv-pod
addons_test.go:527: (dbg) Run:  kubectl --context addons-764596 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-764596 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-764596 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-764596 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-764596 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-764596 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-764596 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-764596 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-764596 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-764596 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-764596 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-764596 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-764596 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-764596 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-764596 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-764596 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-764596 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-764596 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-764596 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-764596 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-764596 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-764596 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-764596 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-764596 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [aadfb2b3-5582-4c41-b566-2841902a10ad] Pending
helpers_test.go:344: "task-pv-pod-restore" [aadfb2b3-5582-4c41-b566-2841902a10ad] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [aadfb2b3-5582-4c41-b566-2841902a10ad] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.004869729s
addons_test.go:553: (dbg) Run:  kubectl --context addons-764596 delete pod task-pv-pod-restore
addons_test.go:557: (dbg) Run:  kubectl --context addons-764596 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-764596 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-764596 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-764596 addons disable volumesnapshots --alsologtostderr -v=1: (1.161037752s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-764596 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-764596 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.98686041s)
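
The repeated helpers_test.go:394 lines above are a poll loop: kubectl get pvc -o jsonpath={.status.phase} is re-run until the claim binds. Sketched in Go by shelling out the same way; the profile, namespace, and claim name are copied from the log, while the loop structure and interval are assumptions:

// pvc_poll.go: poll a PVC's phase via kubectl jsonpath until it is Bound.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func pvcPhase(kubeContext, ns, name string) (string, error) {
	out, err := exec.Command("kubectl", "--context", kubeContext,
		"get", "pvc", name, "-n", ns,
		"-o", "jsonpath={.status.phase}").Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	// 6m matches the "waiting 6m0s for pvc" timeout in the log above.
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		phase, err := pvcPhase("addons-764596", "default", "hpvc")
		fmt.Println("phase:", phase, "err:", err)
		if phase == "Bound" {
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for PVC to bind")
}
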
--- PASS: TestAddons/parallel/CSI (52.47s)

TestAddons/parallel/Headlamp (17.56s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-764596 --alsologtostderr -v=1
addons_test.go:747: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-764596 --alsologtostderr -v=1: (1.73921309s)
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-cd8ffd6fc-xnnbc" [21d50436-2465-4f17-8442-94718991ad7b] Pending
helpers_test.go:344: "headlamp-cd8ffd6fc-xnnbc" [21d50436-2465-4f17-8442-94718991ad7b] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-cd8ffd6fc-xnnbc" [21d50436-2465-4f17-8442-94718991ad7b] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.004132214s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-764596 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-764596 addons disable headlamp --alsologtostderr -v=1: (5.810586519s)
--- PASS: TestAddons/parallel/Headlamp (17.56s)

TestAddons/parallel/CloudSpanner (6.71s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-dc5db94f4-czn5k" [46d92af4-6f92-4256-9686-cdc5f4c3dbb8] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.004731158s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-764596 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.71s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (9.94s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-764596 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-764596 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-764596 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-764596 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-764596 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-764596 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-764596 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-764596 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [521c523a-250f-4367-aae7-0a2661c453ce] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [521c523a-250f-4367-aae7-0a2661c453ce] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [521c523a-250f-4367-aae7-0a2661c453ce] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.004066164s
addons_test.go:906: (dbg) Run:  kubectl --context addons-764596 get pvc test-pvc -o=json
addons_test.go:915: (dbg) Run:  out/minikube-linux-arm64 -p addons-764596 ssh "cat /opt/local-path-provisioner/pvc-88a41dc0-4f18-44bf-b683-a35dabc8b3b8_default_test-pvc/file1"
addons_test.go:927: (dbg) Run:  kubectl --context addons-764596 delete pod test-local-path
addons_test.go:931: (dbg) Run:  kubectl --context addons-764596 delete pvc test-pvc
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-764596 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (9.94s)
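
The pass above exercises the local-path provisioner end to end. For reference, a minimal sketch of the same flow done by hand, assuming the addons-764596 profile with the storage-provisioner-rancher addon enabled (pvc.yaml/pod.yaml are the minikube testdata manifests, not shown here):

	# Create a PVC and a pod that writes to it, then poll until the claim binds.
	kubectl --context addons-764596 apply -f testdata/storage-provisioner-rancher/pvc.yaml
	kubectl --context addons-764596 apply -f testdata/storage-provisioner-rancher/pod.yaml
	kubectl --context addons-764596 get pvc test-pvc -o jsonpath={.status.phase}   # eventually "Bound"
	# The provisioner backs the volume with a node-local directory, so the file the
	# pod wrote is readable from the node (the pvc-<uuid> directory name varies per run):
	minikube -p addons-764596 ssh "ls /opt/local-path-provisioner/"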

TestAddons/parallel/NvidiaDevicePlugin (5.67s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-f74hk" [31293a19-8634-4874-91c6-d3d0ded9f848] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004440804s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-764596 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.67s)

TestAddons/parallel/Yakd (11.86s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-nkvzk" [1e776e4c-da93-4ca4-a227-ea35174491f9] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003914314s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-764596 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-764596 addons disable yakd --alsologtostderr -v=1: (5.856370775s)
--- PASS: TestAddons/parallel/Yakd (11.86s)

TestAddons/StoppedEnableDisable (12.27s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-764596
addons_test.go:170: (dbg) Done: out/minikube-linux-arm64 stop -p addons-764596: (11.977731603s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-764596
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-764596
addons_test.go:183: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-764596
--- PASS: TestAddons/StoppedEnableDisable (12.27s)
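
What this test pins down: addon enable/disable is a profile-config operation, so it must work while the cluster is stopped. A minimal sketch of the same sequence, assuming an existing addons-764596 profile:

	minikube stop -p addons-764596                       # ~12s here
	minikube addons enable dashboard -p addons-764596    # recorded in the profile, applied on next start
	minikube addons disable dashboard -p addons-764596
	minikube addons disable gvisor -p addons-764596      # succeeds even though gvisor was never enabled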

TestCertOptions (37.89s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-724611 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
E1209 11:21:40.520325  592080 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/functional-995264/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-724611 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (35.11512624s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-724611 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-724611 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-724611 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-724611" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-724611
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-724611: (2.069121751s)
--- PASS: TestCertOptions (37.89s)
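
The openssl step above is the actual assertion: the extra IPs, names, and port passed at start time must show up in the generated apiserver certificate. A sketch of the manual check, using the same flags:

	minikube start -p cert-options-724611 --apiserver-ips=192.168.15.15 \
	    --apiserver-names=www.google.com --apiserver-port=8555 \
	    --driver=docker --container-runtime=containerd
	# The requested IPs/names should appear among the certificate's SANs:
	minikube -p cert-options-724611 ssh \
	    "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
	    | grep -A1 'Subject Alternative Name'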

TestCertExpiration (233.61s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-528742 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-528742 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (43.247595395s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-528742 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
E1209 11:24:43.583874  592080 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/functional-995264/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-528742 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (7.98176802s)
helpers_test.go:175: Cleaning up "cert-expiration-528742" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-528742
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-528742: (2.376233218s)
--- PASS: TestCertExpiration (233.61s)
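
The long gap between the two starts is the point of the test: the first start issues certificates valid for only 3 minutes, the test lets them expire, and the second start must regenerate them rather than fail. A sketch (the sleep length is an assumption; anything past the 3m validity works):

	minikube start -p cert-expiration-528742 --cert-expiration=3m \
	    --driver=docker --container-runtime=containerd
	sleep 180   # let the 3-minute certificates expire
	minikube start -p cert-expiration-528742 --cert-expiration=8760h \
	    --driver=docker --container-runtime=containerd   # restarts quickly, regenerating expired certs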

TestForceSystemdFlag (50.05s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-900483 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-900483 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (47.605456026s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-900483 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-900483" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-900483
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-900483: (2.117785693s)
--- PASS: TestForceSystemdFlag (50.05s)
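
The config.toml read above is checking which cgroup driver --force-systemd selected. A hedged sketch of the verification; the SystemdCgroup key lives in containerd's standard CRI runc options table, which is an assumption here since this log does not print the file:

	minikube start -p force-systemd-flag-900483 --force-systemd \
	    --driver=docker --container-runtime=containerd
	minikube -p force-systemd-flag-900483 ssh "cat /etc/containerd/config.toml" | grep SystemdCgroup
	# expected: SystemdCgroup = true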

TestForceSystemdEnv (42.62s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-377461 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-377461 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (39.458887498s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-377461 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-377461" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-377461
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-377461: (2.664664174s)
--- PASS: TestForceSystemdEnv (42.62s)

TestDockerEnvContainerd (46.66s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-824690 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-824690 --driver=docker  --container-runtime=containerd: (30.953811503s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-824690"
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-N6EabfllIAHQ/agent.612658" SSH_AGENT_PID="612659" DOCKER_HOST=ssh://docker@127.0.0.1:33510 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-N6EabfllIAHQ/agent.612658" SSH_AGENT_PID="612659" DOCKER_HOST=ssh://docker@127.0.0.1:33510 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-N6EabfllIAHQ/agent.612658" SSH_AGENT_PID="612659" DOCKER_HOST=ssh://docker@127.0.0.1:33510 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.181187637s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-N6EabfllIAHQ/agent.612658" SSH_AGENT_PID="612659" DOCKER_HOST=ssh://docker@127.0.0.1:33510 docker image ls"
docker_test.go:250: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-N6EabfllIAHQ/agent.612658" SSH_AGENT_PID="612659" DOCKER_HOST=ssh://docker@127.0.0.1:33510 docker image ls": (1.025588581s)
helpers_test.go:175: Cleaning up "dockerenv-824690" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-824690
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-824690: (2.006794504s)
--- PASS: TestDockerEnvContainerd (46.66s)
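
The flow above points a host docker CLI at the cluster node over SSH. A minimal sketch of the same steps, assuming a running dockerenv-824690 profile (the agent socket, PID, and port in the log vary per run):

	# Emit and apply the environment; --ssh-add starts an agent and loads the node key:
	eval "$(minikube docker-env --ssh-host --ssh-add -p dockerenv-824690)"
	# DOCKER_HOST is now ssh://docker@127.0.0.1:<port>, so builds and listings
	# run against the minikube node's Docker endpoint:
	DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env
	docker image ls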

TestErrorSpam/setup (29.43s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-186432 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-186432 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-186432 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-186432 --driver=docker  --container-runtime=containerd: (29.433766625s)
--- PASS: TestErrorSpam/setup (29.43s)

TestErrorSpam/start (0.73s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-186432 --log_dir /tmp/nospam-186432 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-186432 --log_dir /tmp/nospam-186432 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-186432 --log_dir /tmp/nospam-186432 start --dry-run
--- PASS: TestErrorSpam/start (0.73s)

TestErrorSpam/status (1.08s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-186432 --log_dir /tmp/nospam-186432 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-186432 --log_dir /tmp/nospam-186432 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-186432 --log_dir /tmp/nospam-186432 status
--- PASS: TestErrorSpam/status (1.08s)

TestErrorSpam/pause (1.85s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-186432 --log_dir /tmp/nospam-186432 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-186432 --log_dir /tmp/nospam-186432 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-186432 --log_dir /tmp/nospam-186432 pause
--- PASS: TestErrorSpam/pause (1.85s)

TestErrorSpam/unpause (1.9s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-186432 --log_dir /tmp/nospam-186432 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-186432 --log_dir /tmp/nospam-186432 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-186432 --log_dir /tmp/nospam-186432 unpause
--- PASS: TestErrorSpam/unpause (1.90s)

TestErrorSpam/stop (1.49s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-186432 --log_dir /tmp/nospam-186432 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-186432 --log_dir /tmp/nospam-186432 stop: (1.278229333s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-186432 --log_dir /tmp/nospam-186432 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-186432 --log_dir /tmp/nospam-186432 stop
--- PASS: TestErrorSpam/stop (1.49s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/20068-586689/.minikube/files/etc/test/nested/copy/592080/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (51.54s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-arm64 start -p functional-995264 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
functional_test.go:2234: (dbg) Done: out/minikube-linux-arm64 start -p functional-995264 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (51.53606477s)
--- PASS: TestFunctional/serial/StartWithProxy (51.54s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (6.84s)

=== RUN   TestFunctional/serial/SoftStart
I1209 10:45:26.624524  592080 config.go:182] Loaded profile config "functional-995264": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
functional_test.go:659: (dbg) Run:  out/minikube-linux-arm64 start -p functional-995264 --alsologtostderr -v=8
E1209 10:45:27.568984  592080 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/addons-764596/client.crt: no such file or directory" logger="UnhandledError"
E1209 10:45:27.575590  592080 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/addons-764596/client.crt: no such file or directory" logger="UnhandledError"
E1209 10:45:27.587014  592080 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/addons-764596/client.crt: no such file or directory" logger="UnhandledError"
E1209 10:45:27.608933  592080 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/addons-764596/client.crt: no such file or directory" logger="UnhandledError"
E1209 10:45:27.650416  592080 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/addons-764596/client.crt: no such file or directory" logger="UnhandledError"
E1209 10:45:27.731921  592080 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/addons-764596/client.crt: no such file or directory" logger="UnhandledError"
E1209 10:45:27.893361  592080 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/addons-764596/client.crt: no such file or directory" logger="UnhandledError"
E1209 10:45:28.215035  592080 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/addons-764596/client.crt: no such file or directory" logger="UnhandledError"
E1209 10:45:28.856291  592080 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/addons-764596/client.crt: no such file or directory" logger="UnhandledError"
E1209 10:45:30.138275  592080 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/addons-764596/client.crt: no such file or directory" logger="UnhandledError"
E1209 10:45:32.700656  592080 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/addons-764596/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:659: (dbg) Done: out/minikube-linux-arm64 start -p functional-995264 --alsologtostderr -v=8: (6.834509841s)
functional_test.go:663: soft start took 6.835738797s for "functional-995264" cluster.
I1209 10:45:33.459378  592080 config.go:182] Loaded profile config "functional-995264": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
--- PASS: TestFunctional/serial/SoftStart (6.84s)

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.12s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-995264 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.12s)

TestFunctional/serial/CacheCmd/cache/add_remote (4.24s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-995264 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-995264 cache add registry.k8s.io/pause:3.1: (1.637449511s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-995264 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-995264 cache add registry.k8s.io/pause:3.3: (1.371284732s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-995264 cache add registry.k8s.io/pause:latest
E1209 10:45:37.822090  592080 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/addons-764596/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-995264 cache add registry.k8s.io/pause:latest: (1.234564402s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.24s)

TestFunctional/serial/CacheCmd/cache/add_local (1.31s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-995264 /tmp/TestFunctionalserialCacheCmdcacheadd_local1613450218/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-arm64 -p functional-995264 cache add minikube-local-cache-test:functional-995264
functional_test.go:1094: (dbg) Run:  out/minikube-linux-arm64 -p functional-995264 cache delete minikube-local-cache-test:functional-995264
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-995264
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.31s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-arm64 -p functional-995264 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.1s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-arm64 -p functional-995264 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-arm64 -p functional-995264 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-995264 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (316.386146ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-arm64 -p functional-995264 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-arm64 -p functional-995264 cache reload: (1.149443524s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-arm64 -p functional-995264 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.10s)
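
The sequence above is the cache recovery path: deleting a cached image from the node makes `crictl inspecti` fail, and `cache reload` pushes the image back from the host-side cache. Condensed:

	minikube -p functional-995264 ssh sudo crictl rmi registry.k8s.io/pause:latest
	minikube -p functional-995264 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exit 1: image gone
	minikube -p functional-995264 cache reload
	minikube -p functional-995264 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again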

TestFunctional/serial/CacheCmd/cache/delete (0.13s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

TestFunctional/serial/MinikubeKubectlCmd (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-arm64 -p functional-995264 kubectl -- --context functional-995264 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.16s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-995264 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.16s)

TestFunctional/serial/ExtraConfig (47.96s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-arm64 start -p functional-995264 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1209 10:45:48.064514  592080 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/addons-764596/client.crt: no such file or directory" logger="UnhandledError"
E1209 10:46:08.545939  592080 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/addons-764596/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:757: (dbg) Done: out/minikube-linux-arm64 start -p functional-995264 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (47.949451353s)
functional_test.go:761: restart took 47.94957265s for "functional-995264" cluster.
I1209 10:46:30.112633  592080 config.go:182] Loaded profile config "functional-995264": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
--- PASS: TestFunctional/serial/ExtraConfig (47.96s)
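
--extra-config passes component flags through to the Kubernetes components in component.key=value form; here it turns on an extra apiserver admission plugin and restarts the cluster with full waiting. The invocation on its own:

	minikube start -p functional-995264 \
	    --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all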

TestFunctional/serial/ComponentHealth (0.12s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-995264 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.12s)

TestFunctional/serial/LogsCmd (1.74s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-arm64 -p functional-995264 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-arm64 -p functional-995264 logs: (1.73844939s)
--- PASS: TestFunctional/serial/LogsCmd (1.74s)

TestFunctional/serial/LogsFileCmd (1.73s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-arm64 -p functional-995264 logs --file /tmp/TestFunctionalserialLogsFileCmd195953226/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-arm64 -p functional-995264 logs --file /tmp/TestFunctionalserialLogsFileCmd195953226/001/logs.txt: (1.726607683s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.73s)

TestFunctional/serial/InvalidService (4.93s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-995264 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-995264
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-995264: exit status 115 (396.779281ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30214 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-995264 delete -f testdata/invalidsvc.yaml
functional_test.go:2327: (dbg) Done: kubectl --context functional-995264 delete -f testdata/invalidsvc.yaml: (1.274134888s)
--- PASS: TestFunctional/serial/InvalidService (4.93s)
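
Exit status 115 (SVC_UNREACHABLE) is the expected result when `minikube service` targets a Service with no running backing pod. A hypothetical reproduction (invalidsvc.yaml's real contents live in the minikube testdata; this stand-in Service simply selects nothing):

	kubectl --context functional-995264 apply -f - <<-'EOF'
	apiVersion: v1
	kind: Service
	metadata:
	  name: invalid-svc
	spec:
	  type: NodePort
	  selector:
	    app: no-such-pod   # matches no pods, so the service never gets endpoints
	  ports:
	  - port: 80
	EOF
	minikube service invalid-svc -p functional-995264   # exit 115: SVC_UNREACHABLE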

TestFunctional/parallel/ConfigCmd (0.52s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-995264 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-995264 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-995264 config get cpus: exit status 14 (90.171572ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-995264 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-995264 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-995264 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-995264 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-995264 config get cpus: exit status 14 (85.707775ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.52s)
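
The exit codes are the substance here: `config get` on an unset key exits 14 instead of printing an empty value, both before a set and after an unset. Condensed:

	minikube -p functional-995264 config unset cpus
	minikube -p functional-995264 config get cpus    # exit status 14: key not found in config
	minikube -p functional-995264 config set cpus 2
	minikube -p functional-995264 config get cpus    # prints 2
	minikube -p functional-995264 config unset cpus
	minikube -p functional-995264 config get cpus    # exit status 14 again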

TestFunctional/parallel/DashboardCmd (14.62s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-995264 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-995264 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 627555: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (14.62s)

TestFunctional/parallel/DryRun (0.49s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-arm64 start -p functional-995264 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-995264 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (237.209652ms)

-- stdout --
	* [functional-995264] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20068
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20068-586689/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20068-586689/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1209 10:47:12.450242  627255 out.go:345] Setting OutFile to fd 1 ...
	I1209 10:47:12.450434  627255 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 10:47:12.450465  627255 out.go:358] Setting ErrFile to fd 2...
	I1209 10:47:12.450487  627255 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 10:47:12.450870  627255 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20068-586689/.minikube/bin
	I1209 10:47:12.451370  627255 out.go:352] Setting JSON to false
	I1209 10:47:12.453006  627255 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":12580,"bootTime":1733728653,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1072-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1209 10:47:12.453089  627255 start.go:139] virtualization:  
	I1209 10:47:12.457577  627255 out.go:177] * [functional-995264] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1209 10:47:12.460189  627255 out.go:177]   - MINIKUBE_LOCATION=20068
	I1209 10:47:12.460233  627255 notify.go:220] Checking for updates...
	I1209 10:47:12.463653  627255 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 10:47:12.466727  627255 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20068-586689/kubeconfig
	I1209 10:47:12.468577  627255 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20068-586689/.minikube
	I1209 10:47:12.470901  627255 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1209 10:47:12.474083  627255 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 10:47:12.476668  627255 config.go:182] Loaded profile config "functional-995264": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
	I1209 10:47:12.477317  627255 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 10:47:12.513210  627255 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1209 10:47:12.513371  627255 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 10:47:12.601508  627255 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-12-09 10:47:12.583869625 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1072-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1209 10:47:12.601624  627255 docker.go:318] overlay module found
	I1209 10:47:12.604640  627255 out.go:177] * Using the docker driver based on existing profile
	I1209 10:47:12.606654  627255 start.go:297] selected driver: docker
	I1209 10:47:12.606680  627255 start.go:901] validating driver "docker" against &{Name:functional-995264 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-995264 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 10:47:12.606798  627255 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 10:47:12.609313  627255 out.go:201] 
	W1209 10:47:12.611482  627255 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1209 10:47:12.613895  627255 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-arm64 start -p functional-995264 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.49s)
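
Both invocations above stop before touching any resources, but the first still runs validation: a --memory request below minikube's 1800MB floor fails with exit code 23 (RSRC_INSUFFICIENT_REQ_MEMORY), while the second dry run passes. Condensed:

	minikube start -p functional-995264 --dry-run --memory 250MB \
	    --driver=docker --container-runtime=containerd; echo $?   # 23: memory below usable minimum
	minikube start -p functional-995264 --dry-run \
	    --driver=docker --container-runtime=containerd            # dry run passes validation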

TestFunctional/parallel/InternationalLanguage (0.21s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-arm64 start -p functional-995264 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-995264 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (208.632826ms)

-- stdout --
	* [functional-995264] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20068
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20068-586689/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20068-586689/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1209 10:47:12.238973  627209 out.go:345] Setting OutFile to fd 1 ...
	I1209 10:47:12.239154  627209 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 10:47:12.239165  627209 out.go:358] Setting ErrFile to fd 2...
	I1209 10:47:12.239171  627209 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 10:47:12.241092  627209 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20068-586689/.minikube/bin
	I1209 10:47:12.243467  627209 out.go:352] Setting JSON to false
	I1209 10:47:12.245476  627209 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":12580,"bootTime":1733728653,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1072-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1209 10:47:12.245556  627209 start.go:139] virtualization:  
	I1209 10:47:12.248642  627209 out.go:177] * [functional-995264] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	I1209 10:47:12.251443  627209 out.go:177]   - MINIKUBE_LOCATION=20068
	I1209 10:47:12.251558  627209 notify.go:220] Checking for updates...
	I1209 10:47:12.255947  627209 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 10:47:12.258176  627209 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20068-586689/kubeconfig
	I1209 10:47:12.260108  627209 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20068-586689/.minikube
	I1209 10:47:12.262241  627209 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1209 10:47:12.264101  627209 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 10:47:12.266759  627209 config.go:182] Loaded profile config "functional-995264": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
	I1209 10:47:12.267435  627209 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 10:47:12.307623  627209 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1209 10:47:12.307753  627209 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 10:47:12.360269  627209 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-12-09 10:47:12.350216276 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1072-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1209 10:47:12.360379  627209 docker.go:318] overlay module found
	I1209 10:47:12.365021  627209 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I1209 10:47:12.367082  627209 start.go:297] selected driver: docker
	I1209 10:47:12.367106  627209 start.go:901] validating driver "docker" against &{Name:functional-995264 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-995264 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 10:47:12.367254  627209 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 10:47:12.370863  627209 out.go:201] 
	W1209 10:47:12.373588  627209 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1209 10:47:12.375967  627209 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.21s)

TestFunctional/parallel/StatusCmd (1.09s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-arm64 -p functional-995264 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-arm64 -p functional-995264 status -f host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-arm64 -p functional-995264 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.09s)
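The -f flag above is a plain Go template over minikube's status struct (the Host, Kubelet, APIServer, and Kubeconfig fields). A minimal sketch of driving the same check from Go, assuming minikube is on PATH and a running profile named functional-995264; this is illustrative, not the test's actual harness:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Same custom format string the test exercises.
		out, err := exec.Command("minikube", "-p", "functional-995264", "status",
			"-f", "host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}").CombinedOutput()
		if err != nil {
			// minikube status exits non-zero when components are down,
			// so a failed run still carries useful output.
			fmt.Println("non-zero exit:", err)
		}
		fmt.Print(string(out))
	}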

TestFunctional/parallel/ServiceCmdConnect (9.94s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-995264 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-995264 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-nwj6l" [bef93371-e970-462c-9af8-421fbe753bac] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-65d86f57f4-nwj6l" [bef93371-e970-462c-9af8-421fbe753bac] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 9.004543595s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-arm64 -p functional-995264 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:30863
functional_test.go:1675: http://192.168.49.2:30863: success! body:

Hostname: hello-node-connect-65d86f57f4-nwj6l

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:30863
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (9.94s)
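The connect test boils down to: create a deployment, expose it as a NodePort, resolve the URL with `minikube service ... --url`, then GET it. A sketch of the final probe step, using the URL reported above (NodePorts such as 30863 are assigned per cluster, so treat the address as an example):

	package main

	import (
		"fmt"
		"io"
		"net/http"
	)

	func main() {
		// URL as printed by `minikube service hello-node-connect --url`.
		resp, err := http.Get("http://192.168.49.2:30863")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, err := io.ReadAll(resp.Body)
		if err != nil {
			panic(err)
		}
		fmt.Printf("status=%d\n%s", resp.StatusCode, body)
	}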

TestFunctional/parallel/AddonsCmd (0.15s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-arm64 -p functional-995264 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-arm64 -p functional-995264 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)
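A sketch of consuming the JSON form of the addons list programmatically. The entry shape (Profile/Status fields keyed by addon name) is an assumption inferred from observed output, not a documented contract:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// addonStatus is an assumed shape for one entry of `addons list -o json`.
	type addonStatus struct {
		Profile string
		Status  string
	}

	func main() {
		out, err := exec.Command("minikube", "-p", "functional-995264",
			"addons", "list", "-o", "json").Output()
		if err != nil {
			panic(err)
		}
		addons := map[string]addonStatus{}
		if err := json.Unmarshal(out, &addons); err != nil {
			panic(err)
		}
		for name, a := range addons {
			fmt.Printf("%-40s %s\n", name, a.Status)
		}
	}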

TestFunctional/parallel/PersistentVolumeClaim (26.21s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [eb38a39d-6feb-419a-a5ba-dec2f60bf511] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004650484s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-995264 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-995264 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-995264 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-995264 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [a3d39b0b-3ea0-4121-8f9c-be6ac2357c45] Pending
helpers_test.go:344: "sp-pod" [a3d39b0b-3ea0-4121-8f9c-be6ac2357c45] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E1209 10:46:49.507813  592080 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/addons-764596/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "sp-pod" [a3d39b0b-3ea0-4121-8f9c-be6ac2357c45] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.003808694s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-995264 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-995264 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-995264 delete -f testdata/storage-provisioner/pod.yaml: (1.211353417s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-995264 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [9af4cba2-aca6-4ecc-807c-409cf8a7ca70] Pending
helpers_test.go:344: "sp-pod" [9af4cba2-aca6-4ecc-807c-409cf8a7ca70] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [9af4cba2-aca6-4ecc-807c-409cf8a7ca70] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.003484057s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-995264 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (26.21s)
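The persistence claim being tested: a file written in the first sp-pod survives pod deletion because it lives on the PVC-backed volume. A compressed sketch of the same sequence driven from Go, using the testdata paths shown in the log; readiness waiting is omitted, so a real run must wait for the re-created pod to be Running before the final exec, as the test does:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// run executes a command and panics on failure; a tiny helper for the sketch.
	func run(args ...string) {
		out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
		fmt.Print(string(out))
		if err != nil {
			panic(err)
		}
	}

	func main() {
		ctx := "--context=functional-995264"
		// Write a marker file on the claim-backed volume, then recycle the pod.
		run("kubectl", ctx, "exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
		run("kubectl", ctx, "delete", "-f", "testdata/storage-provisioner/pod.yaml")
		run("kubectl", ctx, "apply", "-f", "testdata/storage-provisioner/pod.yaml")
		// Once the new pod is Running, the file written by the old pod
		// must still be visible on the volume.
		run("kubectl", ctx, "exec", "sp-pod", "--", "ls", "/tmp/mount")
	}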

TestFunctional/parallel/SSHCmd (0.67s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-arm64 -p functional-995264 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-arm64 -p functional-995264 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.67s)

TestFunctional/parallel/CpCmd (2.45s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-995264 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-995264 ssh -n functional-995264 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-995264 cp functional-995264:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd778743919/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-995264 ssh -n functional-995264 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-995264 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-995264 ssh -n functional-995264 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.45s)
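A sketch of the same cp round trip from Go, assuming the testdata file exists relative to the working directory; `ssh -n` selects the node, exactly as in the commands logged above:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		p := "functional-995264"
		// Copy a host file into the node's filesystem.
		cp := exec.Command("minikube", "-p", p, "cp",
			"testdata/cp-test.txt", "/home/docker/cp-test.txt")
		if out, err := cp.CombinedOutput(); err != nil {
			panic(fmt.Sprintf("%v: %s", err, out))
		}
		// Read it back through the node's shell to confirm the copy landed.
		cat := exec.Command("minikube", "-p", p, "ssh", "-n", p,
			"sudo cat /home/docker/cp-test.txt")
		out, err := cat.CombinedOutput()
		if err != nil {
			panic(err)
		}
		fmt.Print(string(out))
	}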

TestFunctional/parallel/FileSync (0.37s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/592080/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-arm64 -p functional-995264 ssh "sudo cat /etc/test/nested/copy/592080/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.37s)

TestFunctional/parallel/CertSync (2.08s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/592080.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-995264 ssh "sudo cat /etc/ssl/certs/592080.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/592080.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-995264 ssh "sudo cat /usr/share/ca-certificates/592080.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-995264 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/5920802.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-995264 ssh "sudo cat /etc/ssl/certs/5920802.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/5920802.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-995264 ssh "sudo cat /usr/share/ca-certificates/5920802.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-995264 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.08s)

TestFunctional/parallel/NodeLabels (0.11s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-995264 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.11s)
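The go-template above iterates the first node's label map and prints only the keys. The same invocation from Go; since exec.Command bypasses the shell, the template needs none of the extra quoting seen in the logged command line:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Prints each label key of the first node, space-separated.
		tmpl := `--template={{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}`
		out, err := exec.Command("kubectl", "--context", "functional-995264",
			"get", "nodes", "--output=go-template", tmpl).Output()
		if err != nil {
			panic(err)
		}
		fmt.Println(string(out))
	}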

TestFunctional/parallel/NonActiveRuntimeDisabled (0.57s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-995264 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-995264 ssh "sudo systemctl is-active docker": exit status 1 (293.04466ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-995264 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-995264 ssh "sudo systemctl is-active crio": exit status 1 (281.217097ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.57s)
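This test deliberately expects failure: `systemctl is-active` exits non-zero for an inactive unit, and that exit status is the assertion. A sketch of reading both the stdout and the exit code via exec.ExitError, assuming the same profile:

	package main

	import (
		"bytes"
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		for _, rt := range []string{"docker", "crio"} {
			cmd := exec.Command("minikube", "-p", "functional-995264",
				"ssh", "sudo systemctl is-active "+rt)
			var stdout bytes.Buffer
			cmd.Stdout = &stdout
			err := cmd.Run()
			var ee *exec.ExitError
			// A non-zero exit is the expected, healthy outcome here:
			// only containerd should be active on this cluster.
			if errors.As(err, &ee) {
				fmt.Printf("%s: %s (exit %d)\n", rt,
					bytes.TrimSpace(stdout.Bytes()), ee.ExitCode())
			} else if err == nil {
				fmt.Printf("%s: unexpectedly active\n", rt)
			} else {
				panic(err)
			}
		}
	}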

TestFunctional/parallel/License (0.24s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.24s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.69s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-995264 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-995264 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-995264 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-995264 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 624678: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.69s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-995264 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.48s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-995264 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [fe68e4b0-c0ae-4e36-8ea1-39550fbb15a2] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [fe68e4b0-c0ae-4e36-8ea1-39550fbb15a2] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.004023741s
I1209 10:46:49.999624  592080 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.48s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-995264 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.105.43.99 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
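The direct-access check is just an HTTP GET against the service's ingress IP, which is only routable from the host while `minikube tunnel` is running. A sketch, using the 10.105.43.99 address reported above (cluster-assigned, so treat it as an example):

	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		// Reachable only while `minikube tunnel` is up; a short timeout
		// keeps the probe from hanging if the tunnel is gone.
		client := &http.Client{Timeout: 5 * time.Second}
		resp, err := client.Get("http://10.105.43.99")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		fmt.Println("tunnel reachable, status:", resp.Status)
	}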

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-995264 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ServiceCmd/DeployApp (7.37s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-995264 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-995264 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-m27mf" [7a91f3dc-7286-4c6e-a248-246708d2bff4] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64b4f8f9ff-m27mf" [7a91f3dc-7286-4c6e-a248-246708d2bff4] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.004701308s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.37s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.48s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.48s)
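A sketch of decoding `profile list --output json`. The valid/invalid grouping and the field names are assumptions based on observed output, so treat the struct as illustrative rather than a stable schema:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// profiles is an assumed shape: profiles appear to be grouped into
	// "valid" and "invalid" arrays; only Name is read here.
	type profiles struct {
		Valid   []struct{ Name string } `json:"valid"`
		Invalid []struct{ Name string } `json:"invalid"`
	}

	func main() {
		out, err := exec.Command("minikube", "profile", "list",
			"--output", "json").Output()
		if err != nil {
			panic(err)
		}
		var p profiles
		if err := json.Unmarshal(out, &p); err != nil {
			panic(err)
		}
		for _, v := range p.Valid {
			fmt.Println("valid profile:", v.Name)
		}
		for _, v := range p.Invalid {
			fmt.Println("invalid profile:", v.Name)
		}
	}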

TestFunctional/parallel/ServiceCmd/List (0.69s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-arm64 -p functional-995264 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.69s)

TestFunctional/parallel/ProfileCmd/profile_list (0.53s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1315: Took "459.094824ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1329: Took "68.865924ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.53s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.47s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1366: Took "407.709548ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1379: Took "65.351738ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.47s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.64s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-arm64 -p functional-995264 service list -o json
functional_test.go:1494: Took "637.239061ms" to run "out/minikube-linux-arm64 -p functional-995264 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.64s)

TestFunctional/parallel/MountCmd/any-port (8.08s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-995264 /tmp/TestFunctionalparallelMountCmdany-port1209301468/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1733741228768939325" to /tmp/TestFunctionalparallelMountCmdany-port1209301468/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1733741228768939325" to /tmp/TestFunctionalparallelMountCmdany-port1209301468/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1733741228768939325" to /tmp/TestFunctionalparallelMountCmdany-port1209301468/001/test-1733741228768939325
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-995264 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-995264 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec  9 10:47 created-by-test
-rw-r--r-- 1 docker docker 24 Dec  9 10:47 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec  9 10:47 test-1733741228768939325
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-995264 ssh cat /mount-9p/test-1733741228768939325
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-995264 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [d5475c54-25aa-4187-854a-0cb5f3692cc8] Pending
helpers_test.go:344: "busybox-mount" [d5475c54-25aa-4187-854a-0cb5f3692cc8] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [d5475c54-25aa-4187-854a-0cb5f3692cc8] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [d5475c54-25aa-4187-854a-0cb5f3692cc8] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.004229156s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-995264 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-995264 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-995264 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-995264 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-995264 /tmp/TestFunctionalparallelMountCmdany-port1209301468/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.08s)
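`minikube mount` stays in the foreground for the lifetime of the mount, which is why the test runs it as a daemon and later kills it. A sketch of the same pattern, assuming a host directory /tmp/demo exists; the crude sleep stands in for the test's findmnt polling:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		p := "functional-995264"
		// Started, not waited on: the mount lives as long as this process.
		mount := exec.Command("minikube", "mount", "-p", p, "/tmp/demo:/mount-9p")
		if err := mount.Start(); err != nil {
			panic(err)
		}
		defer mount.Process.Kill() // killing the daemon unmounts the 9p share

		time.Sleep(3 * time.Second) // crude wait; the test polls instead
		out, err := exec.Command("minikube", "-p", p, "ssh",
			"findmnt -T /mount-9p | grep 9p").CombinedOutput()
		fmt.Print(string(out))
		if err != nil {
			fmt.Println("mount not visible yet:", err)
		}
	}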

TestFunctional/parallel/ServiceCmd/HTTPS (0.71s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-arm64 -p functional-995264 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:32299
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.71s)

TestFunctional/parallel/ServiceCmd/Format (0.75s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-arm64 -p functional-995264 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.75s)

TestFunctional/parallel/ServiceCmd/URL (0.64s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-arm64 -p functional-995264 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:32299
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.64s)

TestFunctional/parallel/MountCmd/specific-port (2.13s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-995264 /tmp/TestFunctionalparallelMountCmdspecific-port1060761654/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-995264 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-995264 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (389.725382ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1209 10:47:17.243034  592080 retry.go:31] will retry after 451.674864ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-995264 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-995264 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-995264 /tmp/TestFunctionalparallelMountCmdspecific-port1060761654/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-995264 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-995264 ssh "sudo umount -f /mount-9p": exit status 1 (364.349888ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-995264 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-995264 /tmp/TestFunctionalparallelMountCmdspecific-port1060761654/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.13s)
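The retry.go lines above show the harness backing off and re-running findmnt until the 9p mount appears. A generic sketch of that shape; the real helper uses randomized backoff, while this one is fixed-delay:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// retry re-runs fn until it succeeds or attempts are exhausted,
	// sleeping a fixed delay between tries.
	func retry(attempts int, delay time.Duration, fn func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			fmt.Printf("will retry after %v: %v\n", delay, err)
			time.Sleep(delay)
		}
		return err
	}

	func main() {
		err := retry(5, 500*time.Millisecond, func() error {
			return exec.Command("minikube", "-p", "functional-995264", "ssh",
				"findmnt -T /mount-9p | grep 9p").Run()
		})
		if err != nil {
			fmt.Println("mount never appeared:", err)
		}
	}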

TestFunctional/parallel/MountCmd/VerifyCleanup (2.47s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-995264 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2729628177/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-995264 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2729628177/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-995264 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2729628177/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-995264 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-995264 ssh "findmnt -T" /mount1: exit status 1 (1.021632066s)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1209 10:47:20.008794  592080 retry.go:31] will retry after 285.445073ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-995264 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-995264 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-995264 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-995264 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-995264 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2729628177/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-995264 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2729628177/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-995264 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2729628177/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.47s)

TestFunctional/parallel/Version/short (0.1s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-arm64 -p functional-995264 version --short
--- PASS: TestFunctional/parallel/Version/short (0.10s)

TestFunctional/parallel/Version/components (1.37s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-arm64 -p functional-995264 version -o=json --components
functional_test.go:2270: (dbg) Done: out/minikube-linux-arm64 -p functional-995264 version -o=json --components: (1.36994668s)
--- PASS: TestFunctional/parallel/Version/components (1.37s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-995264 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-995264 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.2
registry.k8s.io/kube-proxy:v1.31.2
registry.k8s.io/kube-controller-manager:v1.31.2
registry.k8s.io/kube-apiserver:v1.31.2
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-995264
docker.io/kindest/kindnetd:v20241108-5c6d2daf
docker.io/kindest/kindnetd:v20241007-36f62932
docker.io/kicbase/echo-server:functional-995264
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-995264 image ls --format short --alsologtostderr:
I1209 10:47:30.529814  630121 out.go:345] Setting OutFile to fd 1 ...
I1209 10:47:30.530426  630121 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1209 10:47:30.530441  630121 out.go:358] Setting ErrFile to fd 2...
I1209 10:47:30.530452  630121 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1209 10:47:30.530870  630121 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20068-586689/.minikube/bin
I1209 10:47:30.531825  630121 config.go:182] Loaded profile config "functional-995264": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
I1209 10:47:30.532032  630121 config.go:182] Loaded profile config "functional-995264": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
I1209 10:47:30.532707  630121 cli_runner.go:164] Run: docker container inspect functional-995264 --format={{.State.Status}}
I1209 10:47:30.557322  630121 ssh_runner.go:195] Run: systemctl --version
I1209 10:47:30.557394  630121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-995264
I1209 10:47:30.585407  630121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33520 SSHKeyPath:/home/jenkins/minikube-integration/20068-586689/.minikube/machines/functional-995264/id_rsa Username:docker}
I1209 10:47:30.678332  630121 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.31s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-995264 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-995264 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/kube-apiserver              | v1.31.2            | sha256:f9c264 | 25.6MB |
| registry.k8s.io/pause                       | 3.1                | sha256:8057e0 | 262kB  |
| registry.k8s.io/pause                       | 3.10               | sha256:afb617 | 268kB  |
| registry.k8s.io/pause                       | 3.3                | sha256:3d1873 | 249kB  |
| docker.io/library/nginx                     | latest             | sha256:bdf62f | 68.5MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:1611cd | 1.94MB |
| registry.k8s.io/etcd                        | 3.5.15-0           | sha256:27e383 | 66.5MB |
| registry.k8s.io/coredns/coredns             | v1.11.3            | sha256:2f6c96 | 16.9MB |
| registry.k8s.io/kube-proxy                  | v1.31.2            | sha256:021d24 | 26.8MB |
| registry.k8s.io/pause                       | latest             | sha256:8cb209 | 71.3kB |
| docker.io/kicbase/echo-server               | functional-995264  | sha256:ce2d2c | 2.17MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:ba04bb | 8.03MB |
| registry.k8s.io/echoserver-arm              | 1.8                | sha256:72565b | 45.3MB |
| docker.io/library/minikube-local-cache-test | functional-995264  | sha256:50c1c4 | 991B   |
| docker.io/library/nginx                     | alpine             | sha256:dba92e | 24.3MB |
| registry.k8s.io/kube-controller-manager     | v1.31.2            | sha256:9404ae | 23.9MB |
| registry.k8s.io/kube-scheduler              | v1.31.2            | sha256:d6b061 | 18.4MB |
| docker.io/kindest/kindnetd                  | v20241007-36f62932 | sha256:0bcd66 | 35.3MB |
| docker.io/kindest/kindnetd                  | v20241108-5c6d2daf | sha256:2be0bc | 35.3MB |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-995264 image ls --format table --alsologtostderr:
I1209 10:47:31.372313  630363 out.go:345] Setting OutFile to fd 1 ...
I1209 10:47:31.372437  630363 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1209 10:47:31.372448  630363 out.go:358] Setting ErrFile to fd 2...
I1209 10:47:31.372452  630363 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1209 10:47:31.372815  630363 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20068-586689/.minikube/bin
I1209 10:47:31.373978  630363 config.go:182] Loaded profile config "functional-995264": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
I1209 10:47:31.374174  630363 config.go:182] Loaded profile config "functional-995264": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
I1209 10:47:31.374695  630363 cli_runner.go:164] Run: docker container inspect functional-995264 --format={{.State.Status}}
I1209 10:47:31.406816  630363 ssh_runner.go:195] Run: systemctl --version
I1209 10:47:31.406877  630363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-995264
I1209 10:47:31.430689  630363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33520 SSHKeyPath:/home/jenkins/minikube-integration/20068-586689/.minikube/machines/functional-995264/id_rsa Username:docker}
I1209 10:47:31.517920  630363 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-995264 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-995264 image ls --format json --alsologtostderr:
[{"id":"sha256:2be0bcf609c6573ee83e676c747f31bda661ab2d4e039c51839e38fd258d2903","repoDigests":["docker.io/kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108"],"repoTags":["docker.io/kindest/kindnetd:v20241108-5c6d2daf"],"size":"35310383"},{"id":"sha256:bdf62fd3a32f1209270ede068b6e08450dfe125c79b1a8ba8f5685090023bf7f","repoDigests":["docker.io/library/nginx@sha256:fb197595ebe76b9c0c14ab68159fd3c08bd067ec62300583543f0ebda353b5be"],"repoTags":["docker.io/library/nginx:latest"],"size":"68524740"},{"id":"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"16948420"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e0
0b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"},{"id":"sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"18306114"},{"id":"sha256:0bcd66b03df5f1498fba5b90226939f5993cfba4c8379438bd8e89f3b4a70baa","repoDigests":["docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387"],"repoTags":["docker.io/kindest/kindnetd:v20241007-36f62932"],"size":"35320503"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:021d2420133054f8835987db659750ff639ab6863776460264dd8025c06644ba","repoDigests":["registry.k8s.i
o/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.2"],"size":"26768683"},{"id":"sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"45324675"},{"id":"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":["registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"66535646"},{"id":"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":["registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"267933"},{"id":"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests
":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"},{"id":"sha256:f9c26480f1e722a7d05d7f1bb339180b19f941b23bcc928208e362df04a61270","repoDigests":["registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.2"],"size":"25612805"},{"id":"sha256:9404aea098d9e80cb648d86c07d56130a1fe875ed7c2526251c2ae68a9bf07ba","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.2"],"size":"23872272"},{"id":"sha256:d6b061e73ae454743cbfe0e3479aa23e4ed65c61d38b4408a1e7f3d3859dda8a","repoDigests":["registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.2"],"size":"18429679"},{"id
":"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-995264"],"size":"2173567"},{"id":"sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"74084559"},{"id":"sha256:50c1c4d9c9e530c3f4586a38be70c7397f09d9ba4a5fadd1c3a6c9af319ff525","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-995264"],"size":"991"},{"id":"sha256:dba92e6b6488643fe4f2e872e6e4f6c30948171890d0f2cb96f28c435352397f","repoDigests":["docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4"],"repoTags":["docker.io/library/nginx:alpine"],"size":"24250568"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-995264 image ls --format json --alsologtostderr:
I1209 10:47:31.104184  630282 out.go:345] Setting OutFile to fd 1 ...
I1209 10:47:31.104404  630282 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1209 10:47:31.104431  630282 out.go:358] Setting ErrFile to fd 2...
I1209 10:47:31.104453  630282 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1209 10:47:31.104826  630282 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20068-586689/.minikube/bin
I1209 10:47:31.105836  630282 config.go:182] Loaded profile config "functional-995264": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
I1209 10:47:31.106158  630282 config.go:182] Loaded profile config "functional-995264": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
I1209 10:47:31.106747  630282 cli_runner.go:164] Run: docker container inspect functional-995264 --format={{.State.Status}}
I1209 10:47:31.127201  630282 ssh_runner.go:195] Run: systemctl --version
I1209 10:47:31.127258  630282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-995264
I1209 10:47:31.152211  630282 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33520 SSHKeyPath:/home/jenkins/minikube-integration/20068-586689/.minikube/machines/functional-995264/id_rsa Username:docker}
I1209 10:47:31.242710  630282 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-995264 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-995264 image ls --format yaml --alsologtostderr:
- id: sha256:0bcd66b03df5f1498fba5b90226939f5993cfba4c8379438bd8e89f3b4a70baa
repoDigests:
- docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387
repoTags:
- docker.io/kindest/kindnetd:v20241007-36f62932
size: "35320503"
- id: sha256:50c1c4d9c9e530c3f4586a38be70c7397f09d9ba4a5fadd1c3a6c9af319ff525
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-995264
size: "991"
- id: sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "45324675"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:bdf62fd3a32f1209270ede068b6e08450dfe125c79b1a8ba8f5685090023bf7f
repoDigests:
- docker.io/library/nginx@sha256:fb197595ebe76b9c0c14ab68159fd3c08bd067ec62300583543f0ebda353b5be
repoTags:
- docker.io/library/nginx:latest
size: "68524740"
- id: sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "16948420"
- id: sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests:
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "66535646"
- id: sha256:f9c26480f1e722a7d05d7f1bb339180b19f941b23bcc928208e362df04a61270
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.2
size: "25612805"
- id: sha256:9404aea098d9e80cb648d86c07d56130a1fe875ed7c2526251c2ae68a9bf07ba
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.2
size: "23872272"
- id: sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests:
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "267933"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-995264
size: "2173567"
- id: sha256:dba92e6b6488643fe4f2e872e6e4f6c30948171890d0f2cb96f28c435352397f
repoDigests:
- docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4
repoTags:
- docker.io/library/nginx:alpine
size: "24250568"
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:021d2420133054f8835987db659750ff639ab6863776460264dd8025c06644ba
repoDigests:
- registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe
repoTags:
- registry.k8s.io/kube-proxy:v1.31.2
size: "26768683"
- id: sha256:d6b061e73ae454743cbfe0e3479aa23e4ed65c61d38b4408a1e7f3d3859dda8a
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.2
size: "18429679"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:2be0bcf609c6573ee83e676c747f31bda661ab2d4e039c51839e38fd258d2903
repoDigests:
- docker.io/kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108
repoTags:
- docker.io/kindest/kindnetd:v20241108-5c6d2daf
size: "35310383"
- id: sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "74084559"
- id: sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "18306114"
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-995264 image ls --format yaml --alsologtostderr:
I1209 10:47:30.826354  630205 out.go:345] Setting OutFile to fd 1 ...
I1209 10:47:30.826564  630205 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1209 10:47:30.826577  630205 out.go:358] Setting ErrFile to fd 2...
I1209 10:47:30.826583  630205 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1209 10:47:30.826881  630205 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20068-586689/.minikube/bin
I1209 10:47:30.827883  630205 config.go:182] Loaded profile config "functional-995264": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
I1209 10:47:30.828073  630205 config.go:182] Loaded profile config "functional-995264": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
I1209 10:47:30.828601  630205 cli_runner.go:164] Run: docker container inspect functional-995264 --format={{.State.Status}}
I1209 10:47:30.847941  630205 ssh_runner.go:195] Run: systemctl --version
I1209 10:47:30.848004  630205 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-995264
I1209 10:47:30.868270  630205 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33520 SSHKeyPath:/home/jenkins/minikube-integration/20068-586689/.minikube/machines/functional-995264/id_rsa Username:docker}
I1209 10:47:30.959030  630205 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)

TestFunctional/parallel/ImageCommands/ImageBuild (4.08s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p functional-995264 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-995264 ssh pgrep buildkitd: exit status 1 (379.152966ms)

** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-arm64 -p functional-995264 image build -t localhost/my-image:functional-995264 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-arm64 -p functional-995264 image build -t localhost/my-image:functional-995264 testdata/build --alsologtostderr: (3.455737801s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-arm64 -p functional-995264 image build -t localhost/my-image:functional-995264 testdata/build --alsologtostderr:
I1209 10:47:30.990279  630255 out.go:345] Setting OutFile to fd 1 ...
I1209 10:47:30.991009  630255 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1209 10:47:30.991044  630255 out.go:358] Setting ErrFile to fd 2...
I1209 10:47:30.991065  630255 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1209 10:47:30.991339  630255 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20068-586689/.minikube/bin
I1209 10:47:30.992027  630255 config.go:182] Loaded profile config "functional-995264": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
I1209 10:47:30.993959  630255 config.go:182] Loaded profile config "functional-995264": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
I1209 10:47:30.994573  630255 cli_runner.go:164] Run: docker container inspect functional-995264 --format={{.State.Status}}
I1209 10:47:31.016221  630255 ssh_runner.go:195] Run: systemctl --version
I1209 10:47:31.016273  630255 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-995264
I1209 10:47:31.046462  630255 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33520 SSHKeyPath:/home/jenkins/minikube-integration/20068-586689/.minikube/machines/functional-995264/id_rsa Username:docker}
I1209 10:47:31.134594  630255 build_images.go:161] Building image from path: /tmp/build.2615822618.tar
I1209 10:47:31.134683  630255 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1209 10:47:31.145106  630255 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2615822618.tar
I1209 10:47:31.150566  630255 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2615822618.tar: stat -c "%s %y" /var/lib/minikube/build/build.2615822618.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2615822618.tar': No such file or directory
I1209 10:47:31.150599  630255 ssh_runner.go:362] scp /tmp/build.2615822618.tar --> /var/lib/minikube/build/build.2615822618.tar (3072 bytes)
I1209 10:47:31.182753  630255 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2615822618
I1209 10:47:31.197312  630255 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2615822618 -xf /var/lib/minikube/build/build.2615822618.tar
I1209 10:47:31.208171  630255 containerd.go:394] Building image: /var/lib/minikube/build/build.2615822618
I1209 10:47:31.208261  630255 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.2615822618 --local dockerfile=/var/lib/minikube/build/build.2615822618 --output type=image,name=localhost/my-image:functional-995264
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.4s
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 DONE 0.1s
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.5s
#6 [2/3] RUN true
#6 DONE 0.7s
#7 [3/3] ADD content.txt /
#7 DONE 0.0s
#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:a82354f4ce10163216a4ad01ee28cf0303e0752d7dd348ae2cf4c18a94c7dc29 0.0s done
#8 exporting config sha256:5c068d8f6ddf176d02b1f1c9a67a7d4db924a8c1e951a3a5f1f8bb7cdb002168 0.0s done
#8 naming to localhost/my-image:functional-995264 done
#8 DONE 0.2s
I1209 10:47:34.351279  630255 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.2615822618 --local dockerfile=/var/lib/minikube/build/build.2615822618 --output type=image,name=localhost/my-image:functional-995264: (3.142992481s)
I1209 10:47:34.351355  630255 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2615822618
I1209 10:47:34.362903  630255 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2615822618.tar
I1209 10:47:34.373755  630255 build_images.go:217] Built localhost/my-image:functional-995264 from /tmp/build.2615822618.tar
I1209 10:47:34.373790  630255 build_images.go:133] succeeded building to: functional-995264
I1209 10:47:34.373795  630255 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-995264 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.08s)

TestFunctional/parallel/ImageCommands/Setup (0.63s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-995264
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.63s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.2s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p functional-995264 image load --daemon kicbase/echo-server:functional-995264 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-995264 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.20s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.11s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p functional-995264 image load --daemon kicbase/echo-server:functional-995264 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-995264 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.11s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.42s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-995264
functional_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p functional-995264 image load --daemon kicbase/echo-server:functional-995264 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-995264 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.42s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.4s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-995264 image save kicbase/echo-server:functional-995264 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.40s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-arm64 -p functional-995264 image rm kicbase/echo-server:functional-995264 --alsologtostderr
2024/12/09 10:47:27 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-995264 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.22s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-995264 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.22s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.2s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-995264 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.20s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.18s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-995264 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.18s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.81s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-arm64 -p functional-995264 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-995264 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.81s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.5s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-995264
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-995264 image save --daemon kicbase/echo-server:functional-995264 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-995264
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.50s)

TestFunctional/delete_echo-server_images (0.04s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-995264
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-995264
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-995264
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (120.63s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-030585 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E1209 10:48:11.430034  592080 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/addons-764596/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-030585 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m59.772770759s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-030585 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (120.63s)

TestMultiControlPlane/serial/DeployApp (31.15s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-030585 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-030585 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-030585 -- rollout status deployment/busybox: (27.856278015s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-030585 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-030585 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-030585 -- exec busybox-7dff88458-6z65b -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-030585 -- exec busybox-7dff88458-s49st -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-030585 -- exec busybox-7dff88458-wrtll -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-030585 -- exec busybox-7dff88458-6z65b -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-030585 -- exec busybox-7dff88458-s49st -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-030585 -- exec busybox-7dff88458-wrtll -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-030585 -- exec busybox-7dff88458-6z65b -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-030585 -- exec busybox-7dff88458-s49st -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-030585 -- exec busybox-7dff88458-wrtll -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (31.15s)

TestMultiControlPlane/serial/PingHostFromPods (1.86s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-030585 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-030585 -- exec busybox-7dff88458-6z65b -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-030585 -- exec busybox-7dff88458-6z65b -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-030585 -- exec busybox-7dff88458-s49st -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-030585 -- exec busybox-7dff88458-s49st -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-030585 -- exec busybox-7dff88458-wrtll -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-030585 -- exec busybox-7dff88458-wrtll -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.86s)

TestMultiControlPlane/serial/AddWorkerNode (24.12s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-030585 -v=7 --alsologtostderr
E1209 10:50:27.568091  592080 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/addons-764596/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-030585 -v=7 --alsologtostderr: (23.061603346s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-030585 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-030585 status -v=7 --alsologtostderr: (1.057662158s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (24.12s)

TestMultiControlPlane/serial/NodeLabels (0.12s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-030585 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.12s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (1.02s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.023987512s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.02s)

TestMultiControlPlane/serial/CopyFile (19.99s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-030585 status --output json -v=7 --alsologtostderr
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-030585 status --output json -v=7 --alsologtostderr: (1.107245131s)
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-030585 cp testdata/cp-test.txt ha-030585:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-030585 ssh -n ha-030585 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-030585 cp ha-030585:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile962377563/001/cp-test_ha-030585.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-030585 ssh -n ha-030585 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-030585 cp ha-030585:/home/docker/cp-test.txt ha-030585-m02:/home/docker/cp-test_ha-030585_ha-030585-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-030585 ssh -n ha-030585 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-030585 ssh -n ha-030585-m02 "sudo cat /home/docker/cp-test_ha-030585_ha-030585-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-030585 cp ha-030585:/home/docker/cp-test.txt ha-030585-m03:/home/docker/cp-test_ha-030585_ha-030585-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-030585 ssh -n ha-030585 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-030585 ssh -n ha-030585-m03 "sudo cat /home/docker/cp-test_ha-030585_ha-030585-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-030585 cp ha-030585:/home/docker/cp-test.txt ha-030585-m04:/home/docker/cp-test_ha-030585_ha-030585-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-030585 ssh -n ha-030585 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-030585 ssh -n ha-030585-m04 "sudo cat /home/docker/cp-test_ha-030585_ha-030585-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-030585 cp testdata/cp-test.txt ha-030585-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-030585 ssh -n ha-030585-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-030585 cp ha-030585-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile962377563/001/cp-test_ha-030585-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-030585 ssh -n ha-030585-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-030585 cp ha-030585-m02:/home/docker/cp-test.txt ha-030585:/home/docker/cp-test_ha-030585-m02_ha-030585.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-030585 ssh -n ha-030585-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-030585 ssh -n ha-030585 "sudo cat /home/docker/cp-test_ha-030585-m02_ha-030585.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-030585 cp ha-030585-m02:/home/docker/cp-test.txt ha-030585-m03:/home/docker/cp-test_ha-030585-m02_ha-030585-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-030585 ssh -n ha-030585-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-030585 ssh -n ha-030585-m03 "sudo cat /home/docker/cp-test_ha-030585-m02_ha-030585-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-030585 cp ha-030585-m02:/home/docker/cp-test.txt ha-030585-m04:/home/docker/cp-test_ha-030585-m02_ha-030585-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-030585 ssh -n ha-030585-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-030585 ssh -n ha-030585-m04 "sudo cat /home/docker/cp-test_ha-030585-m02_ha-030585-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-030585 cp testdata/cp-test.txt ha-030585-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-030585 ssh -n ha-030585-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-030585 cp ha-030585-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile962377563/001/cp-test_ha-030585-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-030585 ssh -n ha-030585-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-030585 cp ha-030585-m03:/home/docker/cp-test.txt ha-030585:/home/docker/cp-test_ha-030585-m03_ha-030585.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-030585 ssh -n ha-030585-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-030585 ssh -n ha-030585 "sudo cat /home/docker/cp-test_ha-030585-m03_ha-030585.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-030585 cp ha-030585-m03:/home/docker/cp-test.txt ha-030585-m02:/home/docker/cp-test_ha-030585-m03_ha-030585-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-030585 ssh -n ha-030585-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-030585 ssh -n ha-030585-m02 "sudo cat /home/docker/cp-test_ha-030585-m03_ha-030585-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-030585 cp ha-030585-m03:/home/docker/cp-test.txt ha-030585-m04:/home/docker/cp-test_ha-030585-m03_ha-030585-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-030585 ssh -n ha-030585-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-030585 ssh -n ha-030585-m04 "sudo cat /home/docker/cp-test_ha-030585-m03_ha-030585-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-030585 cp testdata/cp-test.txt ha-030585-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-030585 ssh -n ha-030585-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-030585 cp ha-030585-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile962377563/001/cp-test_ha-030585-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-030585 ssh -n ha-030585-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-030585 cp ha-030585-m04:/home/docker/cp-test.txt ha-030585:/home/docker/cp-test_ha-030585-m04_ha-030585.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-030585 ssh -n ha-030585-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-030585 ssh -n ha-030585 "sudo cat /home/docker/cp-test_ha-030585-m04_ha-030585.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-030585 cp ha-030585-m04:/home/docker/cp-test.txt ha-030585-m02:/home/docker/cp-test_ha-030585-m04_ha-030585-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-030585 ssh -n ha-030585-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-030585 ssh -n ha-030585-m02 "sudo cat /home/docker/cp-test_ha-030585-m04_ha-030585-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-030585 cp ha-030585-m04:/home/docker/cp-test.txt ha-030585-m03:/home/docker/cp-test_ha-030585-m04_ha-030585-m03.txt
E1209 10:50:55.272006  592080 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/addons-764596/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-030585 ssh -n ha-030585-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-030585 ssh -n ha-030585-m03 "sudo cat /home/docker/cp-test_ha-030585-m04_ha-030585-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.99s)

TestMultiControlPlane/serial/StopSecondaryNode (12.9s)
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-030585 node stop m02 -v=7 --alsologtostderr
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-030585 node stop m02 -v=7 --alsologtostderr: (12.127820012s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-030585 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-030585 status -v=7 --alsologtostderr: exit status 7 (769.003106ms)
-- stdout --
	ha-030585
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-030585-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-030585-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-030585-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I1209 10:51:08.501489  646506 out.go:345] Setting OutFile to fd 1 ...
	I1209 10:51:08.501640  646506 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 10:51:08.501685  646506 out.go:358] Setting ErrFile to fd 2...
	I1209 10:51:08.501698  646506 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 10:51:08.501986  646506 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20068-586689/.minikube/bin
	I1209 10:51:08.502202  646506 out.go:352] Setting JSON to false
	I1209 10:51:08.502241  646506 mustload.go:65] Loading cluster: ha-030585
	I1209 10:51:08.502388  646506 notify.go:220] Checking for updates...
	I1209 10:51:08.502679  646506 config.go:182] Loaded profile config "ha-030585": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
	I1209 10:51:08.502704  646506 status.go:174] checking status of ha-030585 ...
	I1209 10:51:08.503560  646506 cli_runner.go:164] Run: docker container inspect ha-030585 --format={{.State.Status}}
	I1209 10:51:08.524556  646506 status.go:371] ha-030585 host status = "Running" (err=<nil>)
	I1209 10:51:08.524580  646506 host.go:66] Checking if "ha-030585" exists ...
	I1209 10:51:08.524996  646506 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-030585
	I1209 10:51:08.551294  646506 host.go:66] Checking if "ha-030585" exists ...
	I1209 10:51:08.551645  646506 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 10:51:08.551743  646506 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-030585
	I1209 10:51:08.572281  646506 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33525 SSHKeyPath:/home/jenkins/minikube-integration/20068-586689/.minikube/machines/ha-030585/id_rsa Username:docker}
	I1209 10:51:08.662578  646506 ssh_runner.go:195] Run: systemctl --version
	I1209 10:51:08.667732  646506 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 10:51:08.692020  646506 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 10:51:08.758612  646506 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:71 SystemTime:2024-12-09 10:51:08.744828867 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1072-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1209 10:51:08.759254  646506 kubeconfig.go:125] found "ha-030585" server: "https://192.168.49.254:8443"
	I1209 10:51:08.759300  646506 api_server.go:166] Checking apiserver status ...
	I1209 10:51:08.759349  646506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 10:51:08.772473  646506 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1525/cgroup
	I1209 10:51:08.784311  646506 api_server.go:182] apiserver freezer: "3:freezer:/docker/04a8671ffdbdd57ac60c61a405be651c31c0489a0346d726d6a9a94b6ad9a5fc/kubepods/burstable/poddb7a83439b57769e70875740d6a6ad81/5cc35ab536e3e7b9d0d818529ed0b064e0bcd74b3704241636be9f67a085dc15"
	I1209 10:51:08.784384  646506 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/04a8671ffdbdd57ac60c61a405be651c31c0489a0346d726d6a9a94b6ad9a5fc/kubepods/burstable/poddb7a83439b57769e70875740d6a6ad81/5cc35ab536e3e7b9d0d818529ed0b064e0bcd74b3704241636be9f67a085dc15/freezer.state
	I1209 10:51:08.796228  646506 api_server.go:204] freezer state: "THAWED"
	I1209 10:51:08.796264  646506 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1209 10:51:08.804679  646506 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1209 10:51:08.804712  646506 status.go:463] ha-030585 apiserver status = Running (err=<nil>)
	I1209 10:51:08.804731  646506 status.go:176] ha-030585 status: &{Name:ha-030585 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1209 10:51:08.804749  646506 status.go:174] checking status of ha-030585-m02 ...
	I1209 10:51:08.805067  646506 cli_runner.go:164] Run: docker container inspect ha-030585-m02 --format={{.State.Status}}
	I1209 10:51:08.822581  646506 status.go:371] ha-030585-m02 host status = "Stopped" (err=<nil>)
	I1209 10:51:08.822606  646506 status.go:384] host is not running, skipping remaining checks
	I1209 10:51:08.822614  646506 status.go:176] ha-030585-m02 status: &{Name:ha-030585-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1209 10:51:08.822635  646506 status.go:174] checking status of ha-030585-m03 ...
	I1209 10:51:08.822940  646506 cli_runner.go:164] Run: docker container inspect ha-030585-m03 --format={{.State.Status}}
	I1209 10:51:08.840716  646506 status.go:371] ha-030585-m03 host status = "Running" (err=<nil>)
	I1209 10:51:08.840745  646506 host.go:66] Checking if "ha-030585-m03" exists ...
	I1209 10:51:08.841059  646506 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-030585-m03
	I1209 10:51:08.859349  646506 host.go:66] Checking if "ha-030585-m03" exists ...
	I1209 10:51:08.859654  646506 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 10:51:08.859703  646506 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-030585-m03
	I1209 10:51:08.878979  646506 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33535 SSHKeyPath:/home/jenkins/minikube-integration/20068-586689/.minikube/machines/ha-030585-m03/id_rsa Username:docker}
	I1209 10:51:08.970451  646506 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 10:51:08.983859  646506 kubeconfig.go:125] found "ha-030585" server: "https://192.168.49.254:8443"
	I1209 10:51:08.983890  646506 api_server.go:166] Checking apiserver status ...
	I1209 10:51:08.983935  646506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 10:51:08.995652  646506 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1351/cgroup
	I1209 10:51:09.009476  646506 api_server.go:182] apiserver freezer: "3:freezer:/docker/e87ff36a764daf24805c547d05c97cc9cb829b0e836486f6681a9ad71fbdc8b9/kubepods/burstable/pod97d546fc08681edb35ce1af51f3c593f/af591d68d4c70df672c9cea04b2b022d2ab904de03344b12fd9c36522bcce3ba"
	I1209 10:51:09.009583  646506 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/e87ff36a764daf24805c547d05c97cc9cb829b0e836486f6681a9ad71fbdc8b9/kubepods/burstable/pod97d546fc08681edb35ce1af51f3c593f/af591d68d4c70df672c9cea04b2b022d2ab904de03344b12fd9c36522bcce3ba/freezer.state
	I1209 10:51:09.020840  646506 api_server.go:204] freezer state: "THAWED"
	I1209 10:51:09.020876  646506 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1209 10:51:09.029559  646506 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1209 10:51:09.029604  646506 status.go:463] ha-030585-m03 apiserver status = Running (err=<nil>)
	I1209 10:51:09.029614  646506 status.go:176] ha-030585-m03 status: &{Name:ha-030585-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1209 10:51:09.029634  646506 status.go:174] checking status of ha-030585-m04 ...
	I1209 10:51:09.029955  646506 cli_runner.go:164] Run: docker container inspect ha-030585-m04 --format={{.State.Status}}
	I1209 10:51:09.048080  646506 status.go:371] ha-030585-m04 host status = "Running" (err=<nil>)
	I1209 10:51:09.048107  646506 host.go:66] Checking if "ha-030585-m04" exists ...
	I1209 10:51:09.048514  646506 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-030585-m04
	I1209 10:51:09.071385  646506 host.go:66] Checking if "ha-030585-m04" exists ...
	I1209 10:51:09.071702  646506 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 10:51:09.071743  646506 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-030585-m04
	I1209 10:51:09.097663  646506 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33540 SSHKeyPath:/home/jenkins/minikube-integration/20068-586689/.minikube/machines/ha-030585-m04/id_rsa Username:docker}
	I1209 10:51:09.190800  646506 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 10:51:09.204026  646506 status.go:176] ha-030585-m04 status: &{Name:ha-030585-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.90s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.87s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.87s)

TestMultiControlPlane/serial/RestartSecondaryNode (20.19s)
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-030585 node start m02 -v=7 --alsologtostderr
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-030585 node start m02 -v=7 --alsologtostderr: (18.893224264s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-030585 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-030585 status -v=7 --alsologtostderr: (1.162756442s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (20.19s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.06s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.064559031s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.06s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (134.72s)
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-030585 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-030585 -v=7 --alsologtostderr
E1209 10:51:40.521376  592080 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/functional-995264/client.crt: no such file or directory" logger="UnhandledError"
E1209 10:51:40.527786  592080 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/functional-995264/client.crt: no such file or directory" logger="UnhandledError"
E1209 10:51:40.539306  592080 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/functional-995264/client.crt: no such file or directory" logger="UnhandledError"
E1209 10:51:40.560754  592080 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/functional-995264/client.crt: no such file or directory" logger="UnhandledError"
E1209 10:51:40.602101  592080 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/functional-995264/client.crt: no such file or directory" logger="UnhandledError"
E1209 10:51:40.683486  592080 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/functional-995264/client.crt: no such file or directory" logger="UnhandledError"
E1209 10:51:40.844990  592080 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/functional-995264/client.crt: no such file or directory" logger="UnhandledError"
E1209 10:51:41.166741  592080 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/functional-995264/client.crt: no such file or directory" logger="UnhandledError"
E1209 10:51:41.808713  592080 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/functional-995264/client.crt: no such file or directory" logger="UnhandledError"
E1209 10:51:43.090151  592080 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/functional-995264/client.crt: no such file or directory" logger="UnhandledError"
E1209 10:51:45.651570  592080 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/functional-995264/client.crt: no such file or directory" logger="UnhandledError"
E1209 10:51:50.772928  592080 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/functional-995264/client.crt: no such file or directory" logger="UnhandledError"
E1209 10:52:01.014159  592080 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/functional-995264/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 stop -p ha-030585 -v=7 --alsologtostderr: (37.087476378s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 start -p ha-030585 --wait=true -v=7 --alsologtostderr
E1209 10:52:21.495633  592080 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/functional-995264/client.crt: no such file or directory" logger="UnhandledError"
E1209 10:53:02.457292  592080 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/functional-995264/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 start -p ha-030585 --wait=true -v=7 --alsologtostderr: (1m37.443224169s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-030585
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (134.72s)

TestMultiControlPlane/serial/DeleteSecondaryNode (10.08s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-030585 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-030585 node delete m03 -v=7 --alsologtostderr: (9.076815349s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-030585 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.08s)
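
The go-template passed to kubectl in the step above iterates every node and prints the status of its "Ready" condition, one per line. A minimal standalone Go sketch of the same template logic (the node data below is illustrative, not captured from this run):

	package main

	import (
		"os"
		"text/template"
	)

	func main() {
		// Shape mirrors only the fields the template touches in kubectl's JSON output.
		nodes := map[string]any{
			"items": []map[string]any{
				{"status": map[string]any{"conditions": []map[string]any{
					{"type": "Ready", "status": "True"},
				}}},
			},
		}
		t := template.Must(template.New("ready").Parse(
			`{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`))
		_ = t.Execute(os.Stdout, nodes) // prints " True" for each Ready node
	}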

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.76s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.76s)

TestMultiControlPlane/serial/StopCluster (36.02s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-030585 stop -v=7 --alsologtostderr
E1209 10:54:24.378746  592080 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/functional-995264/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-030585 stop -v=7 --alsologtostderr: (35.891458181s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-030585 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-030585 status -v=7 --alsologtostderr: exit status 7 (126.15408ms)

-- stdout --
	ha-030585
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-030585-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-030585-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1209 10:54:32.843238  660816 out.go:345] Setting OutFile to fd 1 ...
	I1209 10:54:32.843382  660816 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 10:54:32.843393  660816 out.go:358] Setting ErrFile to fd 2...
	I1209 10:54:32.843398  660816 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 10:54:32.843695  660816 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20068-586689/.minikube/bin
	I1209 10:54:32.843901  660816 out.go:352] Setting JSON to false
	I1209 10:54:32.843937  660816 mustload.go:65] Loading cluster: ha-030585
	I1209 10:54:32.844039  660816 notify.go:220] Checking for updates...
	I1209 10:54:32.844377  660816 config.go:182] Loaded profile config "ha-030585": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
	I1209 10:54:32.844394  660816 status.go:174] checking status of ha-030585 ...
	I1209 10:54:32.844911  660816 cli_runner.go:164] Run: docker container inspect ha-030585 --format={{.State.Status}}
	I1209 10:54:32.864164  660816 status.go:371] ha-030585 host status = "Stopped" (err=<nil>)
	I1209 10:54:32.864186  660816 status.go:384] host is not running, skipping remaining checks
	I1209 10:54:32.864193  660816 status.go:176] ha-030585 status: &{Name:ha-030585 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1209 10:54:32.864217  660816 status.go:174] checking status of ha-030585-m02 ...
	I1209 10:54:32.864515  660816 cli_runner.go:164] Run: docker container inspect ha-030585-m02 --format={{.State.Status}}
	I1209 10:54:32.893202  660816 status.go:371] ha-030585-m02 host status = "Stopped" (err=<nil>)
	I1209 10:54:32.893226  660816 status.go:384] host is not running, skipping remaining checks
	I1209 10:54:32.893273  660816 status.go:176] ha-030585-m02 status: &{Name:ha-030585-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1209 10:54:32.893292  660816 status.go:174] checking status of ha-030585-m04 ...
	I1209 10:54:32.893598  660816 cli_runner.go:164] Run: docker container inspect ha-030585-m04 --format={{.State.Status}}
	I1209 10:54:32.911043  660816 status.go:371] ha-030585-m04 host status = "Stopped" (err=<nil>)
	I1209 10:54:32.911070  660816 status.go:384] host is not running, skipping remaining checks
	I1209 10:54:32.911077  660816 status.go:176] ha-030585-m04 status: &{Name:ha-030585-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.02s)
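
Exit status 7 from the status command above is the expected signal here: minikube uses a non-zero exit to report stopped components rather than a command failure, and the same per-node information is available in machine-readable form. A hedged Go sketch, assuming this report's binary and profile and that `status` accepts -o json for structured output:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("out/minikube-linux-arm64",
			"-p", "ha-030585", "status", "-o", "json").CombinedOutput()
		fmt.Printf("%s", out) // per-node Host/Kubelet/APIServer/Kubeconfig fields
		if ee, ok := err.(*exec.ExitError); ok {
			fmt.Println("exit code:", ee.ExitCode()) // 7 in the fully stopped state logged above
		}
	}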

TestMultiControlPlane/serial/RestartCluster (81.53s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 start -p ha-030585 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E1209 10:55:27.567709  592080 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/addons-764596/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 start -p ha-030585 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m20.553845096s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-030585 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (81.53s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.8s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.80s)

TestMultiControlPlane/serial/AddSecondaryNode (42.25s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-030585 --control-plane -v=7 --alsologtostderr
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 node add -p ha-030585 --control-plane -v=7 --alsologtostderr: (41.287303023s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-030585 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (42.25s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.06s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.062525664s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.06s)

TestJSONOutput/start/Command (66.8s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-914580 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd
E1209 10:57:08.220260  592080 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/functional-995264/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-914580 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd: (1m6.793682524s)
--- PASS: TestJSONOutput/start/Command (66.80s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.74s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-914580 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.74s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.72s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-914580 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.72s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (6.08s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-914580 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-914580 --output=json --user=testUser: (6.078861523s)
--- PASS: TestJSONOutput/stop/Command (6.08s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.25s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-425464 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-425464 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (84.338899ms)

-- stdout --
	{"specversion":"1.0","id":"e6df3ed9-d43d-46aa-a9be-d9c2c0868a1a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-425464] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"2b779894-742e-48c6-8fd5-b017ab2d2974","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20068"}}
	{"specversion":"1.0","id":"4fff97f7-ccf3-49f6-9da8-b88da5d0b724","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"517dbe8a-d7ec-435d-89dc-cbf95f120501","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20068-586689/kubeconfig"}}
	{"specversion":"1.0","id":"d816a18e-8120-4226-905c-21d73375f09b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20068-586689/.minikube"}}
	{"specversion":"1.0","id":"9eaed0e2-43f2-42f8-ad88-5c7f957081a9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"5d7b7783-f5fe-4e89-82c0-8104de1a029a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"a7f11e62-2d7b-4a13-aa0a-e93fa072f584","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-425464" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-425464
--- PASS: TestErrorJSONOutput (0.25s)
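
The stdout captured above is minikube's --output=json event stream: one JSON object per line, CloudEvents-style, each carrying a "type" plus a string-valued "data" map (specversion, id, and source elided here). A sketch of consuming that stream, with field names taken only from the events shown above:

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	type event struct {
		Type string            `json:"type"`
		Data map[string]string `json:"data"`
	}

	func main() {
		// Usage sketch: minikube start --output=json ... | this program
		sc := bufio.NewScanner(os.Stdin)
		for sc.Scan() {
			var ev event
			if json.Unmarshal(sc.Bytes(), &ev) != nil {
				continue // skip any non-JSON lines
			}
			if ev.Type == "io.k8s.sigs.minikube.error" {
				fmt.Printf("exit %s: %s\n", ev.Data["exitcode"], ev.Data["message"])
			}
		}
	}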

TestKicCustomNetwork/create_custom_network (39.52s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-186440 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-186440 --network=: (37.224031834s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-186440" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-186440
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-186440: (2.269135047s)
--- PASS: TestKicCustomNetwork/create_custom_network (39.52s)

TestKicCustomNetwork/use_default_bridge_network (33.91s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-770802 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-770802 --network=bridge: (31.835429645s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-770802" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-770802
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-770802: (2.038297429s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (33.91s)

TestKicExistingNetwork (31.57s)

=== RUN   TestKicExistingNetwork
I1209 10:59:19.542995  592080 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1209 10:59:19.560783  592080 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1209 10:59:19.560878  592080 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1209 10:59:19.561643  592080 cli_runner.go:164] Run: docker network inspect existing-network
W1209 10:59:19.578545  592080 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1209 10:59:19.578586  592080 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I1209 10:59:19.578600  592080 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I1209 10:59:19.578715  592080 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1209 10:59:19.596005  592080 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-3f46af3becfa IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:b8:cb:97:e7} reservation:<nil>}
I1209 10:59:19.596872  592080 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001e039c0}
I1209 10:59:19.596906  592080 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1209 10:59:19.597422  592080 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1209 10:59:19.674596  592080 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-619018 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-619018 --network=existing-network: (29.372402479s)
helpers_test.go:175: Cleaning up "existing-network-619018" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-619018
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-619018: (2.034530057s)
I1209 10:59:51.098742  592080 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (31.57s)
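
The log above shows the test's pre-step: it probes for a free private /24 (192.168.49.0/24 was taken, so 192.168.58.0/24 is chosen), creates "existing-network" itself with docker network create, and only then runs minikube start --network=existing-network against it. A sketch of that pre-step, assuming the docker CLI on PATH and the subnet free, with flags condensed from the command logged above:

	package main

	import (
		"log"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("docker", "network", "create",
			"--driver=bridge",
			"--subnet=192.168.58.0/24",
			"--gateway=192.168.58.1",
			"--label=created_by.minikube.sigs.k8s.io=true",
			"existing-network").CombinedOutput()
		if err != nil {
			log.Fatalf("network create: %v\n%s", err, out)
		}
		// minikube start --network=existing-network can now attach to it.
		log.Printf("created: %s", out)
	}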

TestKicCustomSubnet (36.11s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-200617 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-200617 --subnet=192.168.60.0/24: (33.945526183s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-200617 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-200617" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-200617
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-200617: (2.109676508s)
--- PASS: TestKicCustomSubnet (36.11s)

TestKicStaticIP (36.76s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-392218 --static-ip=192.168.200.200
E1209 11:00:27.567560  592080 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/addons-764596/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-392218 --static-ip=192.168.200.200: (34.466131901s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-392218 ip
helpers_test.go:175: Cleaning up "static-ip-392218" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-392218
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-392218: (2.117512915s)
--- PASS: TestKicStaticIP (36.76s)

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (72.11s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-858892 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-858892 --driver=docker  --container-runtime=containerd: (33.72070015s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-862035 --driver=docker  --container-runtime=containerd
E1209 11:01:40.520930  592080 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/functional-995264/client.crt: no such file or directory" logger="UnhandledError"
E1209 11:01:50.633642  592080 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/addons-764596/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-862035 --driver=docker  --container-runtime=containerd: (32.64444537s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-858892
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-862035
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-862035" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-862035
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-862035: (1.974970582s)
helpers_test.go:175: Cleaning up "first-858892" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-858892
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-858892: (2.31662634s)
--- PASS: TestMinikubeProfile (72.11s)
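
Both profiles are verified above via profile list -ojson. A sketch of parsing that output programmatically; note the "valid" array and "Name" field are assumptions about minikube's JSON schema, not shown verbatim in this log:

	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("out/minikube-linux-arm64",
			"profile", "list", "-ojson").Output()
		if err != nil {
			log.Fatal(err)
		}
		var list struct {
			Valid []struct {
				Name string `json:"Name"`
			} `json:"valid"` // assumed key names, see lead-in
		}
		if err := json.Unmarshal(out, &list); err != nil {
			log.Fatal(err)
		}
		for _, p := range list.Valid {
			fmt.Println(p.Name) // e.g. first-858892, second-862035 in the run above
		}
	}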

TestMountStart/serial/StartWithMountFirst (9.12s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-787985 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-787985 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (8.119573316s)
--- PASS: TestMountStart/serial/StartWithMountFirst (9.12s)

TestMountStart/serial/VerifyMountFirst (0.26s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-787985 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)

TestMountStart/serial/StartWithMountSecond (6.21s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-789749 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-789749 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.208103553s)
--- PASS: TestMountStart/serial/StartWithMountSecond (6.21s)

TestMountStart/serial/VerifyMountSecond (0.28s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-789749 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.28s)

TestMountStart/serial/DeleteFirst (1.62s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-787985 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-787985 --alsologtostderr -v=5: (1.615585202s)
--- PASS: TestMountStart/serial/DeleteFirst (1.62s)

TestMountStart/serial/VerifyMountPostDelete (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-789749 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

TestMountStart/serial/Stop (1.21s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-789749
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-789749: (1.214825556s)
--- PASS: TestMountStart/serial/Stop (1.21s)

TestMountStart/serial/RestartStopped (7.4s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-789749
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-789749: (6.396312035s)
--- PASS: TestMountStart/serial/RestartStopped (7.40s)

TestMountStart/serial/VerifyMountPostStop (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-789749 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

TestMultiNode/serial/FreshStart2Nodes (68s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-425646 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-425646 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m7.428798235s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-425646 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (68.00s)

TestMultiNode/serial/DeployApp2Nodes (16.25s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-425646 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-425646 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-425646 -- rollout status deployment/busybox: (14.183397262s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-425646 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-425646 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-425646 -- exec busybox-7dff88458-bwm8w -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-425646 -- exec busybox-7dff88458-vdsc7 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-425646 -- exec busybox-7dff88458-bwm8w -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-425646 -- exec busybox-7dff88458-vdsc7 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-425646 -- exec busybox-7dff88458-bwm8w -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-425646 -- exec busybox-7dff88458-vdsc7 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (16.25s)

TestMultiNode/serial/PingHostFrom2Pods (1.09s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-425646 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-425646 -- exec busybox-7dff88458-bwm8w -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-425646 -- exec busybox-7dff88458-bwm8w -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-425646 -- exec busybox-7dff88458-vdsc7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-425646 -- exec busybox-7dff88458-vdsc7 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.09s)
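
The in-pod pipeline above resolves host.minikube.internal with nslookup, takes line 5 of the output (awk 'NR==5'), extracts the third space-separated field (cut -d' ' -f3), and pings the result (192.168.67.1 in this run). A Go sketch of just the text extraction; the sample text is illustrative of busybox nslookup output, not captured from this run:

	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		// Illustrative sample only; real output shape may differ by nslookup version.
		sample := "Server:    10.96.0.10\n" +
			"Address 1: 10.96.0.10\n" +
			"\n" +
			"Name:      host.minikube.internal\n" +
			"Address 1: 192.168.67.1 host.minikube.internal\n"
		line := strings.Split(sample, "\n")[4]   // awk 'NR==5'
		fmt.Println(strings.Split(line, " ")[2]) // cut -d' ' -f3 -> 192.168.67.1
	}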

TestMultiNode/serial/AddNode (15.32s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-425646 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-425646 -v 3 --alsologtostderr: (14.588503875s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-425646 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (15.32s)

TestMultiNode/serial/MultiNodeLabels (0.1s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-425646 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.10s)

TestMultiNode/serial/ProfileList (0.69s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.69s)

TestMultiNode/serial/CopyFile (10.25s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-425646 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-425646 cp testdata/cp-test.txt multinode-425646:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-425646 ssh -n multinode-425646 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-425646 cp multinode-425646:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4283710715/001/cp-test_multinode-425646.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-425646 ssh -n multinode-425646 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-425646 cp multinode-425646:/home/docker/cp-test.txt multinode-425646-m02:/home/docker/cp-test_multinode-425646_multinode-425646-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-425646 ssh -n multinode-425646 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-425646 ssh -n multinode-425646-m02 "sudo cat /home/docker/cp-test_multinode-425646_multinode-425646-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-425646 cp multinode-425646:/home/docker/cp-test.txt multinode-425646-m03:/home/docker/cp-test_multinode-425646_multinode-425646-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-425646 ssh -n multinode-425646 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-425646 ssh -n multinode-425646-m03 "sudo cat /home/docker/cp-test_multinode-425646_multinode-425646-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-425646 cp testdata/cp-test.txt multinode-425646-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-425646 ssh -n multinode-425646-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-425646 cp multinode-425646-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4283710715/001/cp-test_multinode-425646-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-425646 ssh -n multinode-425646-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-425646 cp multinode-425646-m02:/home/docker/cp-test.txt multinode-425646:/home/docker/cp-test_multinode-425646-m02_multinode-425646.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-425646 ssh -n multinode-425646-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-425646 ssh -n multinode-425646 "sudo cat /home/docker/cp-test_multinode-425646-m02_multinode-425646.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-425646 cp multinode-425646-m02:/home/docker/cp-test.txt multinode-425646-m03:/home/docker/cp-test_multinode-425646-m02_multinode-425646-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-425646 ssh -n multinode-425646-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-425646 ssh -n multinode-425646-m03 "sudo cat /home/docker/cp-test_multinode-425646-m02_multinode-425646-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-425646 cp testdata/cp-test.txt multinode-425646-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-425646 ssh -n multinode-425646-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-425646 cp multinode-425646-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4283710715/001/cp-test_multinode-425646-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-425646 ssh -n multinode-425646-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-425646 cp multinode-425646-m03:/home/docker/cp-test.txt multinode-425646:/home/docker/cp-test_multinode-425646-m03_multinode-425646.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-425646 ssh -n multinode-425646-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-425646 ssh -n multinode-425646 "sudo cat /home/docker/cp-test_multinode-425646-m03_multinode-425646.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-425646 cp multinode-425646-m03:/home/docker/cp-test.txt multinode-425646-m02:/home/docker/cp-test_multinode-425646-m03_multinode-425646-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-425646 ssh -n multinode-425646-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-425646 ssh -n multinode-425646-m02 "sudo cat /home/docker/cp-test_multinode-425646-m03_multinode-425646-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.25s)
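
The block above copies a test file in every direction (host to node, node to host, node to node) and verifies each hop with ssh -n ... sudo cat. A condensed sketch of one host-to-node round trip using the same commands, assuming the multinode-425646 profile from this run is up:

	package main

	import (
		"bytes"
		"log"
		"os"
		"os/exec"
	)

	func main() {
		mk := "out/minikube-linux-arm64"
		// run executes the minikube binary and aborts on any failure.
		run := func(args ...string) []byte {
			out, err := exec.Command(mk, args...).CombinedOutput()
			if err != nil {
				log.Fatalf("%v: %v\n%s", args, err, out)
			}
			return out
		}
		run("-p", "multinode-425646", "cp", "testdata/cp-test.txt",
			"multinode-425646:/home/docker/cp-test.txt")
		got := run("-p", "multinode-425646", "ssh", "-n", "multinode-425646",
			"sudo cat /home/docker/cp-test.txt")
		want, err := os.ReadFile("testdata/cp-test.txt")
		if err != nil {
			log.Fatal(err)
		}
		if !bytes.Equal(got, want) {
			log.Fatal("contents differ after cp")
		}
	}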

TestMultiNode/serial/StopNode (2.28s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-425646 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-425646 node stop m03: (1.228136683s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-425646 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-425646 status: exit status 7 (534.598645ms)

-- stdout --
	multinode-425646
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-425646-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-425646-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-425646 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-425646 status --alsologtostderr: exit status 7 (517.635791ms)

-- stdout --
	multinode-425646
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-425646-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-425646-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1209 11:04:38.100150  714577 out.go:345] Setting OutFile to fd 1 ...
	I1209 11:04:38.100355  714577 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 11:04:38.100365  714577 out.go:358] Setting ErrFile to fd 2...
	I1209 11:04:38.100372  714577 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 11:04:38.100635  714577 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20068-586689/.minikube/bin
	I1209 11:04:38.100841  714577 out.go:352] Setting JSON to false
	I1209 11:04:38.100880  714577 mustload.go:65] Loading cluster: multinode-425646
	I1209 11:04:38.100948  714577 notify.go:220] Checking for updates...
	I1209 11:04:38.102361  714577 config.go:182] Loaded profile config "multinode-425646": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
	I1209 11:04:38.102398  714577 status.go:174] checking status of multinode-425646 ...
	I1209 11:04:38.103079  714577 cli_runner.go:164] Run: docker container inspect multinode-425646 --format={{.State.Status}}
	I1209 11:04:38.121269  714577 status.go:371] multinode-425646 host status = "Running" (err=<nil>)
	I1209 11:04:38.121297  714577 host.go:66] Checking if "multinode-425646" exists ...
	I1209 11:04:38.121608  714577 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-425646
	I1209 11:04:38.141027  714577 host.go:66] Checking if "multinode-425646" exists ...
	I1209 11:04:38.141409  714577 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 11:04:38.141462  714577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-425646
	I1209 11:04:38.165234  714577 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33645 SSHKeyPath:/home/jenkins/minikube-integration/20068-586689/.minikube/machines/multinode-425646/id_rsa Username:docker}
	I1209 11:04:38.255471  714577 ssh_runner.go:195] Run: systemctl --version
	I1209 11:04:38.260953  714577 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 11:04:38.273686  714577 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 11:04:38.332150  714577 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:&lt;nil&gt; Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:&lt;nil&gt; Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:61 SystemTime:2024-12-09 11:04:38.322004536 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1072-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:&lt;nil&gt; DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:&lt;nil&gt;} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:&lt;nil&gt;}}
	I1209 11:04:38.332765  714577 kubeconfig.go:125] found "multinode-425646" server: "https://192.168.67.2:8443"
	I1209 11:04:38.332813  714577 api_server.go:166] Checking apiserver status ...
	I1209 11:04:38.332867  714577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:04:38.345065  714577 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1457/cgroup
	I1209 11:04:38.355515  714577 api_server.go:182] apiserver freezer: "3:freezer:/docker/229163ef408786b9667e9e49fb2089948fbacff464a1658306b08b6adba6877f/kubepods/burstable/podc56a4a2a7419d77626173f05322848e8/0dca6426c52c3c4a988e5f7f1a5820a48b1d35bcdfe03d5fef8f48a70c86e4ff"
	I1209 11:04:38.355600  714577 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/229163ef408786b9667e9e49fb2089948fbacff464a1658306b08b6adba6877f/kubepods/burstable/podc56a4a2a7419d77626173f05322848e8/0dca6426c52c3c4a988e5f7f1a5820a48b1d35bcdfe03d5fef8f48a70c86e4ff/freezer.state
	I1209 11:04:38.365176  714577 api_server.go:204] freezer state: "THAWED"
	I1209 11:04:38.365203  714577 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1209 11:04:38.378552  714577 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1209 11:04:38.378594  714577 status.go:463] multinode-425646 apiserver status = Running (err=<nil>)
	I1209 11:04:38.378615  714577 status.go:176] multinode-425646 status: &{Name:multinode-425646 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1209 11:04:38.378641  714577 status.go:174] checking status of multinode-425646-m02 ...
	I1209 11:04:38.379036  714577 cli_runner.go:164] Run: docker container inspect multinode-425646-m02 --format={{.State.Status}}
	I1209 11:04:38.396263  714577 status.go:371] multinode-425646-m02 host status = "Running" (err=<nil>)
	I1209 11:04:38.396288  714577 host.go:66] Checking if "multinode-425646-m02" exists ...
	I1209 11:04:38.396601  714577 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-425646-m02
	I1209 11:04:38.413948  714577 host.go:66] Checking if "multinode-425646-m02" exists ...
	I1209 11:04:38.414471  714577 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 11:04:38.414532  714577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-425646-m02
	I1209 11:04:38.431995  714577 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33650 SSHKeyPath:/home/jenkins/minikube-integration/20068-586689/.minikube/machines/multinode-425646-m02/id_rsa Username:docker}
	I1209 11:04:38.523278  714577 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 11:04:38.536415  714577 status.go:176] multinode-425646-m02 status: &{Name:multinode-425646-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1209 11:04:38.536453  714577 status.go:174] checking status of multinode-425646-m03 ...
	I1209 11:04:38.536797  714577 cli_runner.go:164] Run: docker container inspect multinode-425646-m03 --format={{.State.Status}}
	I1209 11:04:38.554512  714577 status.go:371] multinode-425646-m03 host status = "Stopped" (err=<nil>)
	I1209 11:04:38.554540  714577 status.go:384] host is not running, skipping remaining checks
	I1209 11:04:38.554547  714577 status.go:176] multinode-425646-m03 status: &{Name:multinode-425646-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.28s)
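
The stderr trace above shows how the status check verifies the apiserver: it locates the kube-apiserver process, resolves its cgroup, confirms the freezer state is THAWED (i.e. not paused), and then probes /healthz over HTTPS. Below is a minimal Go sketch of those two probes; the endpoint is copied from the log, the freezer path is a placeholder, and the code is illustrative only, not minikube's implementation.

// Minimal sketch (not minikube's actual code) of the two checks in the log:
// read the apiserver's cgroup freezer state, then probe /healthz.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"os"
	"strings"
)

func main() {
	// Placeholder path; the real one (shown in the log) is per-container
	// and requires root to read.
	state, err := os.ReadFile("/sys/fs/cgroup/freezer/docker/CONTAINER_ID/freezer.state")
	if err != nil {
		fmt.Fprintln(os.Stderr, "freezer state:", err)
	} else if s := strings.TrimSpace(string(state)); s != "THAWED" {
		fmt.Printf("apiserver frozen/paused: %s\n", s)
		return
	}

	// The test cluster uses a self-signed cert, hence InsecureSkipVerify.
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Get("https://192.168.67.2:8443/healthz")
	if err != nil {
		fmt.Fprintln(os.Stderr, "healthz:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("healthz:", resp.Status) // expect "200 OK", as in the log
}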

                                                
                                    
TestMultiNode/serial/StartAfterStop (9.72s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-425646 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-425646 node start m03 -v=7 --alsologtostderr: (8.842374166s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-425646 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.72s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (131.37s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-425646
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-425646
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-425646: (24.936859262s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-425646 --wait=true -v=8 --alsologtostderr
E1209 11:05:27.567800  592080 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/addons-764596/client.crt: no such file or directory" logger="UnhandledError"
E1209 11:06:40.520838  592080 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/functional-995264/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-425646 --wait=true -v=8 --alsologtostderr: (1m46.302488544s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-425646
--- PASS: TestMultiNode/serial/RestartKeepsNodes (131.37s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.83s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-425646 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-425646 node delete m03: (5.150641096s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-425646 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.83s)
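
The kubectl call above passes a go-template that prints the status of each node's Ready condition. The sketch below runs the same template logic with Go's text/template against a hand-built node list; kubectl evaluates the lowercase JSON field names (.items, .status.conditions), so this sketch uses exported Go fields instead.

// Sketch of what the go-template in the kubectl command evaluates: for every
// node, print the status of its "Ready" condition.
package main

import (
	"os"
	"text/template"
)

type condition struct{ Type, Status string }

type node struct {
	Status struct{ Conditions []condition }
}

type nodeList struct{ Items []node }

const tmpl = `{{range .Items}}{{range .Status.Conditions}}{{if eq .Type "Ready"}} {{.Status}}{{"\n"}}{{end}}{{end}}{{end}}`

func main() {
	var list nodeList
	n := node{}
	n.Status.Conditions = []condition{
		{Type: "MemoryPressure", Status: "False"},
		{Type: "Ready", Status: "True"},
	}
	list.Items = []node{n, n} // two Ready nodes, as after the m03 delete
	template.Must(template.New("ready").Parse(tmpl)).Execute(os.Stdout, list)
	// Prints one " True" line per node.
}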

                                                
                                    
TestMultiNode/serial/StopMultiNode (23.94s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-425646 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-425646 stop: (23.730146107s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-425646 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-425646 status: exit status 7 (107.28004ms)

                                                
                                                
-- stdout --
	multinode-425646
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-425646-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-425646 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-425646 status --alsologtostderr: exit status 7 (103.227494ms)

                                                
                                                
-- stdout --
	multinode-425646
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-425646-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1209 11:07:29.374729  723098 out.go:345] Setting OutFile to fd 1 ...
	I1209 11:07:29.374974  723098 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 11:07:29.374987  723098 out.go:358] Setting ErrFile to fd 2...
	I1209 11:07:29.374993  723098 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 11:07:29.375253  723098 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20068-586689/.minikube/bin
	I1209 11:07:29.375815  723098 out.go:352] Setting JSON to false
	I1209 11:07:29.375851  723098 mustload.go:65] Loading cluster: multinode-425646
	I1209 11:07:29.376291  723098 config.go:182] Loaded profile config "multinode-425646": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
	I1209 11:07:29.376315  723098 status.go:174] checking status of multinode-425646 ...
	I1209 11:07:29.376855  723098 cli_runner.go:164] Run: docker container inspect multinode-425646 --format={{.State.Status}}
	I1209 11:07:29.377178  723098 notify.go:220] Checking for updates...
	I1209 11:07:29.397009  723098 status.go:371] multinode-425646 host status = "Stopped" (err=<nil>)
	I1209 11:07:29.397037  723098 status.go:384] host is not running, skipping remaining checks
	I1209 11:07:29.397045  723098 status.go:176] multinode-425646 status: &{Name:multinode-425646 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1209 11:07:29.397079  723098 status.go:174] checking status of multinode-425646-m02 ...
	I1209 11:07:29.397541  723098 cli_runner.go:164] Run: docker container inspect multinode-425646-m02 --format={{.State.Status}}
	I1209 11:07:29.418921  723098 status.go:371] multinode-425646-m02 host status = "Stopped" (err=<nil>)
	I1209 11:07:29.418945  723098 status.go:384] host is not running, skipping remaining checks
	I1209 11:07:29.418953  723098 status.go:176] multinode-425646-m02 status: &{Name:multinode-425646-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.94s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (52.22s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-425646 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E1209 11:08:03.582541  592080 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/functional-995264/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-425646 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (51.536235085s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-425646 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (52.22s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (33.8s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-425646
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-425646-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-425646-m02 --driver=docker  --container-runtime=containerd: exit status 14 (91.06212ms)

                                                
                                                
-- stdout --
	* [multinode-425646-m02] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20068
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20068-586689/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20068-586689/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-425646-m02' is duplicated with machine name 'multinode-425646-m02' in profile 'multinode-425646'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-425646-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-425646-m03 --driver=docker  --container-runtime=containerd: (31.288032959s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-425646
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-425646: exit status 80 (364.891527ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-425646 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-425646-m03 already exists in multinode-425646-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-425646-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-425646-m03: (1.994133858s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (33.80s)
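
The MK_USAGE failure above enforces that a new profile name may not collide with an existing profile or with a machine name inside a multi-node profile (here, multinode-425646-m02). A sketch of that uniqueness rule follows; validateProfileName is a hypothetical stand-in, not minikube's function.

// Hedged sketch of the uniqueness rule the test exercises.
package main

import "fmt"

// validateProfileName rejects names that match an existing profile or any
// machine (node) name belonging to one.
func validateProfileName(name string, profiles map[string][]string) error {
	for profile, machines := range profiles {
		if name == profile {
			return fmt.Errorf("profile name %q already exists", name)
		}
		for _, m := range machines {
			if name == m {
				return fmt.Errorf("profile name %q is duplicated with machine name %q in profile %q", name, m, profile)
			}
		}
	}
	return nil
}

func main() {
	existing := map[string][]string{
		"multinode-425646": {"multinode-425646", "multinode-425646-m02"},
	}
	fmt.Println(validateProfileName("multinode-425646-m02", existing)) // rejected
	fmt.Println(validateProfileName("multinode-425646-m03", existing)) // nil: allowed
}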

                                                
                                    
TestPreload (114.36s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-803118 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-803118 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m15.799471169s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-803118 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-803118 image pull gcr.io/k8s-minikube/busybox: (1.864343s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-803118
E1209 11:10:27.567887  592080 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/addons-764596/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-803118: (12.055394622s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-803118 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-803118 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (21.657192696s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-803118 image list
helpers_test.go:175: Cleaning up "test-preload-803118" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-803118
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-803118: (2.599582285s)
--- PASS: TestPreload (114.36s)

                                                
                                    
TestScheduledStopUnix (105.78s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-885812 --memory=2048 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-885812 --memory=2048 --driver=docker  --container-runtime=containerd: (29.816989612s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-885812 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-885812 -n scheduled-stop-885812
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-885812 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1209 11:11:24.259458  592080 retry.go:31] will retry after 54.086µs: open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/scheduled-stop-885812/pid: no such file or directory
I1209 11:11:24.259853  592080 retry.go:31] will retry after 159.258µs: open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/scheduled-stop-885812/pid: no such file or directory
I1209 11:11:24.261378  592080 retry.go:31] will retry after 280.696µs: open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/scheduled-stop-885812/pid: no such file or directory
I1209 11:11:24.262547  592080 retry.go:31] will retry after 483.425µs: open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/scheduled-stop-885812/pid: no such file or directory
I1209 11:11:24.263722  592080 retry.go:31] will retry after 717.246µs: open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/scheduled-stop-885812/pid: no such file or directory
I1209 11:11:24.264888  592080 retry.go:31] will retry after 959.825µs: open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/scheduled-stop-885812/pid: no such file or directory
I1209 11:11:24.266027  592080 retry.go:31] will retry after 1.499955ms: open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/scheduled-stop-885812/pid: no such file or directory
I1209 11:11:24.268309  592080 retry.go:31] will retry after 976.787µs: open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/scheduled-stop-885812/pid: no such file or directory
I1209 11:11:24.269449  592080 retry.go:31] will retry after 1.326814ms: open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/scheduled-stop-885812/pid: no such file or directory
I1209 11:11:24.271703  592080 retry.go:31] will retry after 3.126769ms: open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/scheduled-stop-885812/pid: no such file or directory
I1209 11:11:24.275976  592080 retry.go:31] will retry after 7.721854ms: open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/scheduled-stop-885812/pid: no such file or directory
I1209 11:11:24.284231  592080 retry.go:31] will retry after 10.35002ms: open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/scheduled-stop-885812/pid: no such file or directory
I1209 11:11:24.295370  592080 retry.go:31] will retry after 13.136004ms: open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/scheduled-stop-885812/pid: no such file or directory
I1209 11:11:24.309601  592080 retry.go:31] will retry after 28.052924ms: open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/scheduled-stop-885812/pid: no such file or directory
I1209 11:11:24.337768  592080 retry.go:31] will retry after 33.513012ms: open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/scheduled-stop-885812/pid: no such file or directory
I1209 11:11:24.372070  592080 retry.go:31] will retry after 29.089434ms: open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/scheduled-stop-885812/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-885812 --cancel-scheduled
E1209 11:11:40.520416  592080 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/functional-995264/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-885812 -n scheduled-stop-885812
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-885812
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-885812 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-885812
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-885812: exit status 7 (71.098418ms)

                                                
                                                
-- stdout --
	scheduled-stop-885812
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-885812 -n scheduled-stop-885812
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-885812 -n scheduled-stop-885812: exit status 7 (78.524168ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-885812" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-885812
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-885812: (4.298504888s)
--- PASS: TestScheduledStopUnix (105.78s)
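
The retry.go lines above show delays growing roughly geometrically with jitter while the test waits for the profile's pid file to appear. Below is a stdlib-only Go sketch of that pattern; the initial delay, growth factor, and deadline are assumptions for illustration, not minikube's exact tuning.

// Sketch of jittered exponential backoff while polling for a file.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"os"
	"time"
)

func waitForFile(path string, deadline time.Duration) error {
	delay := 50 * time.Microsecond // assumed starting delay
	timeout := time.After(deadline)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		// Jittered exponential backoff, mirroring the growing delays in the log.
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v\n", sleep)
		select {
		case <-timeout:
			return errors.New("timed out waiting for " + path)
		case <-time.After(sleep):
		}
		delay *= 2
	}
}

func main() {
	// Hypothetical path; the test polls the profile's pid file.
	fmt.Println(waitForFile("/tmp/scheduled-stop-demo/pid", 5*time.Millisecond))
}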

                                                
                                    
TestInsufficientStorage (10.43s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-337066 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-337066 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (7.944002836s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"f2576668-c76a-474e-99a0-0e82ef51d21b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-337066] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"9df9319a-bd80-40d0-8026-10f5f325ca54","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20068"}}
	{"specversion":"1.0","id":"e9ef5044-cfcc-45d7-9f5f-4d568853fd12","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"413dc6bd-9089-49d0-a909-39e93b97fb92","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20068-586689/kubeconfig"}}
	{"specversion":"1.0","id":"cc158266-165b-46ad-9ce7-7a583b8e4257","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20068-586689/.minikube"}}
	{"specversion":"1.0","id":"9069edf5-82fc-4eb8-91c8-ce49b2a200be","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"47efa501-5925-4e5f-8dae-84fc7fda0edc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"21e497f2-a5e4-40b8-80b9-035ec0dea147","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"54096b84-8b50-406b-bc54-c7d6be09321c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"2ea93cea-33bc-463d-ae36-e55f2c05892e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"877ae0aa-2a1b-4c0b-9434-5769514d72ab","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"b05aee6f-570d-4770-b1f7-84666adf12ea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-337066\" primary control-plane node in \"insufficient-storage-337066\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"d4cf0e88-dbca-4f68-9d21-173965c2c7e8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.45-1730888964-19917 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"71e92da1-9703-4a72-884d-dfe7b5d21442","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"d2c9fd24-a975-4617-b278-0b2ed3eb8acc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-337066 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-337066 --output=json --layout=cluster: exit status 7 (282.292283ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-337066","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-337066","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1209 11:12:47.885328  741719 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-337066" does not appear in /home/jenkins/minikube-integration/20068-586689/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-337066 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-337066 --output=json --layout=cluster: exit status 7 (315.268017ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-337066","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-337066","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1209 11:12:48.201406  741779 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-337066" does not appear in /home/jenkins/minikube-integration/20068-586689/kubeconfig
	E1209 11:12:48.212188  741779 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/insufficient-storage-337066/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-337066" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-337066
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-337066: (1.891291873s)
--- PASS: TestInsufficientStorage (10.43s)
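
With --output=json, each progress line above is a CloudEvents envelope (specversion, type io.k8s.sigs.minikube.step or io.k8s.sigs.minikube.error) whose data field carries the step counters or the error advice. A sketch of consuming such a stream, using two abbreviated lines from the output above as sample input:

// Sketch: decode minikube's line-delimited CloudEvents JSON output.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"strings"
)

type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	stream := `{"specversion":"1.0","type":"io.k8s.sigs.minikube.step","data":{"currentstep":"0","name":"Initial Minikube Setup","totalsteps":"19"}}
{"specversion":"1.0","type":"io.k8s.sigs.minikube.error","data":{"exitcode":"26","name":"RSRC_DOCKER_STORAGE","message":"Docker is out of disk space!"}}`

	sc := bufio.NewScanner(strings.NewReader(stream))
	for sc.Scan() {
		var e event
		if err := json.Unmarshal(sc.Bytes(), &e); err != nil {
			continue // skip non-JSON noise in the stream
		}
		switch e.Type {
		case "io.k8s.sigs.minikube.step":
			fmt.Printf("step %s/%s: %s\n", e.Data["currentstep"], e.Data["totalsteps"], e.Data["name"])
		case "io.k8s.sigs.minikube.error":
			fmt.Printf("error %s (exit %s): %s\n", e.Data["name"], e.Data["exitcode"], e.Data["message"])
		}
	}
}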

                                                
                                    
TestRunningBinaryUpgrade (93.67s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.841366716 start -p running-upgrade-433314 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.841366716 start -p running-upgrade-433314 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (49.325788235s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-433314 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-433314 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (39.762257548s)
helpers_test.go:175: Cleaning up "running-upgrade-433314" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-433314
E1209 11:20:27.567654  592080 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/addons-764596/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-433314: (3.825686215s)
--- PASS: TestRunningBinaryUpgrade (93.67s)

                                                
                                    
TestKubernetesUpgrade (358.03s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-031296 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-031296 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m2.318221789s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-031296
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-031296: (1.229271654s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-031296 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-031296 status --format={{.Host}}: exit status 7 (75.980662ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-031296 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E1209 11:15:27.567383  592080 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/addons-764596/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-031296 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m38.964841503s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-031296 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-031296 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-031296 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd: exit status 106 (89.976219ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-031296] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20068
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20068-586689/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20068-586689/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.2 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-031296
	    minikube start -p kubernetes-upgrade-031296 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-0312962 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.2, by running:
	    
	    minikube start -p kubernetes-upgrade-031296 --kubernetes-version=v1.31.2
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-031296 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-031296 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (10.893099269s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-031296" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-031296
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-031296: (4.341737575s)
--- PASS: TestKubernetesUpgrade (358.03s)
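
The K8S_DOWNGRADE_UNSUPPORTED refusal above reduces to a version comparison: the requested v1.20.0 sorts below the cluster's running v1.31.2. A stdlib-only sketch of such a guard follows; minikube's real check lives in its own code and may differ in detail.

// Sketch: refuse a Kubernetes version older than the one the cluster runs.
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parse splits "vMAJOR.MINOR.PATCH" into three integers.
func parse(v string) (parts [3]int, err error) {
	fields := strings.SplitN(strings.TrimPrefix(v, "v"), ".", 3)
	if len(fields) != 3 {
		return parts, fmt.Errorf("bad version %q", v)
	}
	for i, f := range fields {
		if parts[i], err = strconv.Atoi(f); err != nil {
			return parts, err
		}
	}
	return parts, nil
}

func checkDowngrade(current, requested string) error {
	c, err := parse(current)
	if err != nil {
		return err
	}
	r, err := parse(requested)
	if err != nil {
		return err
	}
	for i := 0; i < 3; i++ {
		if r[i] < c[i] {
			return fmt.Errorf("unable to safely downgrade existing Kubernetes %s cluster to %s", current, requested)
		}
		if r[i] > c[i] {
			break // newer: an upgrade, allowed
		}
	}
	return nil
}

func main() {
	fmt.Println(checkDowngrade("v1.31.2", "v1.20.0")) // refused, as in the test
	fmt.Println(checkDowngrade("v1.20.0", "v1.31.2")) // nil: upgrade allowed
}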

                                                
                                    
TestMissingContainerUpgrade (177.25s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.396455387 start -p missing-upgrade-752865 --memory=2200 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.396455387 start -p missing-upgrade-752865 --memory=2200 --driver=docker  --container-runtime=containerd: (1m38.940099587s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-752865
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-752865: (10.34961624s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-752865
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-752865 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E1209 11:16:40.520516  592080 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/functional-995264/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-752865 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m4.698666306s)
helpers_test.go:175: Cleaning up "missing-upgrade-752865" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-752865
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-752865: (2.560816447s)
--- PASS: TestMissingContainerUpgrade (177.25s)

                                                
                                    
TestPause/serial/Start (62.85s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-896060 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-896060 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (1m2.850254935s)
--- PASS: TestPause/serial/Start (62.85s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-401066 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-401066 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd: exit status 14 (103.228785ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-401066] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20068
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20068-586689/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20068-586689/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
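
The exit status 14 here comes from flag validation before any cluster work starts: --no-kubernetes and --kubernetes-version are mutually exclusive. A sketch of that rule with the standard flag package (minikube itself builds on cobra/viper, so this is illustrative only):

// Sketch: reject a mutually exclusive flag combination with exit status 14,
// the code the test asserts on.
package main

import (
	"flag"
	"fmt"
	"os"
)

func main() {
	noKubernetes := flag.Bool("no-kubernetes", false, "start without Kubernetes")
	kubernetesVersion := flag.String("kubernetes-version", "", "Kubernetes version to use")
	flag.Parse()

	if *noKubernetes && *kubernetesVersion != "" {
		fmt.Fprintln(os.Stderr, "X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes")
		os.Exit(14)
	}
	fmt.Println("flags ok")
}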

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (40.56s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-401066 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-401066 --driver=docker  --container-runtime=containerd: (40.163401129s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-401066 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (40.56s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (18s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-401066 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-401066 --no-kubernetes --driver=docker  --container-runtime=containerd: (15.753821687s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-401066 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-401066 status -o json: exit status 2 (293.238862ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-401066","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-401066
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-401066: (1.953514246s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (18.00s)

                                                
                                    
TestNoKubernetes/serial/Start (9.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-401066 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-401066 --no-kubernetes --driver=docker  --container-runtime=containerd: (9.093510342s)
--- PASS: TestNoKubernetes/serial/Start (9.09s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (7.61s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-896060 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-896060 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (7.58724893s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (7.61s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.42s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-401066 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-401066 "sudo systemctl is-active --quiet service kubelet": exit status 1 (421.191729ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.42s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.17s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.17s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-401066
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-401066: (1.276391582s)
--- PASS: TestNoKubernetes/serial/Stop (1.28s)

                                                
                                    
TestPause/serial/Pause (1.18s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-896060 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-896060 --alsologtostderr -v=5: (1.180062215s)
--- PASS: TestPause/serial/Pause (1.18s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (7.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-401066 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-401066 --driver=docker  --container-runtime=containerd: (7.271997381s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.27s)

                                                
                                    
TestPause/serial/VerifyStatus (0.48s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-896060 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-896060 --output=json --layout=cluster: exit status 2 (480.399555ms)

                                                
                                                
-- stdout --
	{"Name":"pause-896060","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-896060","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.48s)
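
The --layout=cluster JSON above encodes component state with HTTP-style codes: 200 OK, 405 Stopped, 418 Paused, 507 InsufficientStorage. A sketch that decodes such a document (trimmed from the output above) and treats only 200 and 418 as expected states:

// Sketch: decode minikube's cluster-layout status JSON.
package main

import (
	"encoding/json"
	"fmt"
)

type component struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
}

type clusterStatus struct {
	Name       string               `json:"Name"`
	StatusCode int                  `json:"StatusCode"`
	Components map[string]component `json:"Components"`
}

func main() {
	raw := `{"Name":"pause-896060","StatusCode":418,"Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}}}`
	var st clusterStatus
	if err := json.Unmarshal([]byte(raw), &st); err != nil {
		panic(err)
	}
	fmt.Printf("cluster %s: %d\n", st.Name, st.StatusCode)
	for _, c := range st.Components {
		expected := c.StatusCode == 200 || c.StatusCode == 418 // OK or deliberately paused
		fmt.Printf("  %s: %s (%d) expected=%v\n", c.Name, c.StatusName, c.StatusCode, expected)
	}
}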

                                                
                                    
TestPause/serial/Unpause (0.76s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-896060 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.76s)

                                                
                                    
TestPause/serial/PauseAgain (0.88s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-896060 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.88s)

                                                
                                    
TestPause/serial/DeletePaused (2.74s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-896060 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-896060 --alsologtostderr -v=5: (2.738940118s)
--- PASS: TestPause/serial/DeletePaused (2.74s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.48s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-896060
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-896060: exit status 1 (20.765827ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-896060: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.48s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.42s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-401066 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-401066 "sudo systemctl is-active --quiet service kubelet": exit status 1 (414.522377ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.42s)
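Note: systemctl is-active --quiet exits non-zero when the unit is not active (here status 3, relayed through ssh), which is exactly the state the test expects once Kubernetes is disabled. A standalone check along the same lines:

	# Exit 0 only if kubelet is active; --quiet suppresses the state string
	out/minikube-linux-arm64 ssh -p NoKubernetes-401066 \
	  "sudo systemctl is-active --quiet service kubelet" || echo "kubelet not running (exit $?)"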

TestStoppedBinaryUpgrade/Setup (0.59s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.59s)

TestStoppedBinaryUpgrade/Upgrade (106.22s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.3415680952 start -p stopped-upgrade-645358 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.3415680952 start -p stopped-upgrade-645358 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (46.03073917s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.3415680952 -p stopped-upgrade-645358 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.3415680952 -p stopped-upgrade-645358 stop: (20.006539031s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-645358 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E1209 11:18:30.635845  592080 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/addons-764596/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-645358 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (40.184646744s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (106.22s)
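Note: the upgrade scenario is three steps: boot the cluster with an old release binary, stop it, then start the same profile with the binary under test. Condensed from the invocations above (logging flags dropped):

	/tmp/minikube-v1.26.0.3415680952 start -p stopped-upgrade-645358 --memory=2200 --vm-driver=docker --container-runtime=containerd
	/tmp/minikube-v1.26.0.3415680952 -p stopped-upgrade-645358 stop
	out/minikube-linux-arm64 start -p stopped-upgrade-645358 --memory=2200 --driver=docker --container-runtime=containerd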

TestStoppedBinaryUpgrade/MinikubeLogs (1.13s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-645358
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-645358: (1.134780268s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.13s)

TestNetworkPlugins/group/false (4.88s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-776941 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-776941 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (247.010223ms)

-- stdout --
	* [false-776941] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20068
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20068-586689/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20068-586689/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I1209 11:20:33.453493  782291 out.go:345] Setting OutFile to fd 1 ...
	I1209 11:20:33.453735  782291 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 11:20:33.453762  782291 out.go:358] Setting ErrFile to fd 2...
	I1209 11:20:33.453784  782291 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 11:20:33.454079  782291 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20068-586689/.minikube/bin
	I1209 11:20:33.454540  782291 out.go:352] Setting JSON to false
	I1209 11:20:33.455551  782291 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":14581,"bootTime":1733728653,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1072-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1209 11:20:33.455649  782291 start.go:139] virtualization:  
	I1209 11:20:33.460089  782291 out.go:177] * [false-776941] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1209 11:20:33.463244  782291 out.go:177]   - MINIKUBE_LOCATION=20068
	I1209 11:20:33.463308  782291 notify.go:220] Checking for updates...
	I1209 11:20:33.468920  782291 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 11:20:33.471299  782291 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20068-586689/kubeconfig
	I1209 11:20:33.474000  782291 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20068-586689/.minikube
	I1209 11:20:33.476831  782291 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1209 11:20:33.479824  782291 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 11:20:33.482851  782291 config.go:182] Loaded profile config "force-systemd-flag-900483": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
	I1209 11:20:33.482984  782291 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 11:20:33.516356  782291 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1209 11:20:33.516479  782291 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 11:20:33.612729  782291 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:35 OomKillDisable:true NGoroutines:52 SystemTime:2024-12-09 11:20:33.603804845 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1072-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1209 11:20:33.612846  782291 docker.go:318] overlay module found
	I1209 11:20:33.615384  782291 out.go:177] * Using the docker driver based on user configuration
	I1209 11:20:33.617641  782291 start.go:297] selected driver: docker
	I1209 11:20:33.617667  782291 start.go:901] validating driver "docker" against <nil>
	I1209 11:20:33.617695  782291 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 11:20:33.621314  782291 out.go:201] 
	W1209 11:20:33.623273  782291 out.go:270] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I1209 11:20:33.625678  782291 out.go:201] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-776941 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-776941

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-776941

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-776941

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-776941

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-776941

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-776941

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-776941

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-776941

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-776941

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-776941

>>> host: /etc/nsswitch.conf:
* Profile "false-776941" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-776941"

>>> host: /etc/hosts:
* Profile "false-776941" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-776941"

>>> host: /etc/resolv.conf:
* Profile "false-776941" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-776941"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-776941

>>> host: crictl pods:
* Profile "false-776941" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-776941"

>>> host: crictl containers:
* Profile "false-776941" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-776941"

>>> k8s: describe netcat deployment:
error: context "false-776941" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-776941" does not exist

>>> k8s: netcat logs:
error: context "false-776941" does not exist

>>> k8s: describe coredns deployment:
error: context "false-776941" does not exist

>>> k8s: describe coredns pods:
error: context "false-776941" does not exist

>>> k8s: coredns logs:
error: context "false-776941" does not exist

>>> k8s: describe api server pod(s):
error: context "false-776941" does not exist

>>> k8s: api server logs:
error: context "false-776941" does not exist

>>> host: /etc/cni:
* Profile "false-776941" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-776941"

>>> host: ip a s:
* Profile "false-776941" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-776941"

>>> host: ip r s:
* Profile "false-776941" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-776941"

>>> host: iptables-save:
* Profile "false-776941" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-776941"

>>> host: iptables table nat:
* Profile "false-776941" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-776941"

>>> k8s: describe kube-proxy daemon set:
error: context "false-776941" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-776941" does not exist

>>> k8s: kube-proxy logs:
error: context "false-776941" does not exist

>>> host: kubelet daemon status:
* Profile "false-776941" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-776941"

>>> host: kubelet daemon config:
* Profile "false-776941" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-776941"

>>> k8s: kubelet logs:
* Profile "false-776941" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-776941"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-776941" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-776941"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-776941" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-776941"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-776941

>>> host: docker daemon status:
* Profile "false-776941" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-776941"

>>> host: docker daemon config:
* Profile "false-776941" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-776941"

>>> host: /etc/docker/daemon.json:
* Profile "false-776941" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-776941"

>>> host: docker system info:
* Profile "false-776941" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-776941"

>>> host: cri-docker daemon status:
* Profile "false-776941" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-776941"

>>> host: cri-docker daemon config:
* Profile "false-776941" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-776941"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-776941" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-776941"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-776941" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-776941"

>>> host: cri-dockerd version:
* Profile "false-776941" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-776941"

>>> host: containerd daemon status:
* Profile "false-776941" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-776941"

>>> host: containerd daemon config:
* Profile "false-776941" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-776941"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-776941" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-776941"

>>> host: /etc/containerd/config.toml:
* Profile "false-776941" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-776941"

>>> host: containerd config dump:
* Profile "false-776941" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-776941"

>>> host: crio daemon status:
* Profile "false-776941" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-776941"

>>> host: crio daemon config:
* Profile "false-776941" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-776941"

>>> host: /etc/crio:
* Profile "false-776941" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-776941"

>>> host: crio config:
* Profile "false-776941" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-776941"

----------------------- debugLogs end: false-776941 [took: 4.433960185s] --------------------------------
helpers_test.go:175: Cleaning up "false-776941" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-776941
--- PASS: TestNetworkPlugins/group/false (4.88s)
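Note: exit status 14 is minikube's MK_USAGE error class. The --cni=false flag is rejected during flag validation because the containerd runtime requires a CNI, so no cluster is ever created and every debug probe above reports a missing profile or context. The one-line reproduction, taken from the invocation above:

	out/minikube-linux-arm64 start -p false-776941 --memory=2048 --cni=false --driver=docker --container-runtime=containerd
	# => exit 14: The "containerd" container runtime requires CNI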

TestStartStop/group/old-k8s-version/serial/FirstStart (172.72s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-623695 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-623695 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: (2m52.723538167s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (172.72s)

TestStartStop/group/no-preload/serial/FirstStart (76.41s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-239649 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-239649 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.2: (1m16.411632729s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (76.41s)

TestStartStop/group/old-k8s-version/serial/DeployApp (10.74s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-623695 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [db9e64b4-2527-4741-94c5-357369d5df4f] Pending
helpers_test.go:344: "busybox" [db9e64b4-2527-4741-94c5-357369d5df4f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [db9e64b4-2527-4741-94c5-357369d5df4f] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.004133412s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-623695 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.74s)
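Note: DeployApp applies a busybox manifest, waits for the pod matching integration-test=busybox to become Ready, then execs a sanity command in it. Roughly equivalent with plain kubectl; the test uses its own polling helpers, so kubectl wait here is my substitution:

	kubectl --context old-k8s-version-623695 create -f testdata/busybox.yaml
	kubectl --context old-k8s-version-623695 wait pod -l integration-test=busybox --for=condition=Ready --timeout=8m
	kubectl --context old-k8s-version-623695 exec busybox -- /bin/sh -c "ulimit -n"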

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.51s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-623695 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-623695 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.324887758s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-623695 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.51s)

TestStartStop/group/old-k8s-version/serial/Stop (13.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-623695 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-623695 --alsologtostderr -v=3: (13.01170933s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (13.01s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-623695 -n old-k8s-version-623695
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-623695 -n old-k8s-version-623695: exit status 7 (113.115784ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-623695 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.25s)
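Note: --format={{.Host}} renders a Go template over the status struct; "Stopped" with exit status 7 is the expected state right after the Stop step, and addons can still be toggled against a stopped profile. A sketch of the same two-step check:

	out/minikube-linux-arm64 status --format='{{.Host}}' -p old-k8s-version-623695; echo "exit $?"   # Stopped, exit 7 here
	out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-623695 --images=MetricsScraper=registry.k8s.io/echoserver:1.4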

TestStartStop/group/no-preload/serial/DeployApp (8.49s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-239649 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [981cdab4-3267-4988-a51a-14e565d9500d] Pending
helpers_test.go:344: "busybox" [981cdab4-3267-4988-a51a-14e565d9500d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [981cdab4-3267-4988-a51a-14e565d9500d] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.004410537s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-239649 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.49s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.42s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-239649 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-239649 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.274093277s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-239649 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.42s)

TestStartStop/group/no-preload/serial/Stop (12.12s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-239649 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-239649 --alsologtostderr -v=3: (12.117014792s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.12s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-239649 -n no-preload-239649
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-239649 -n no-preload-239649: exit status 7 (84.618253ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-239649 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/no-preload/serial/SecondStart (267.12s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-239649 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.2
E1209 11:26:40.520447  592080 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/functional-995264/client.crt: no such file or directory" logger="UnhandledError"
E1209 11:30:27.568082  592080 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/addons-764596/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-239649 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.2: (4m26.737337587s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-239649 -n no-preload-239649
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (267.12s)
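Note: SecondStart reruns start against the stopped profile, which restarts the existing container and waits for components to come back; the trailing status call is the assertion that the host is Running again. Condensed:

	out/minikube-linux-arm64 start -p no-preload-239649 --memory=2200 --wait=true --preload=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.31.2
	out/minikube-linux-arm64 status --format='{{.Host}}' -p no-preload-239649   # expect Running, exit 0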

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-frkp5" [dd980b10-5d0a-4e11-8a91-547974207b68] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004894688s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-frkp5" [dd980b10-5d0a-4e11-8a91-547974207b68] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004282356s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-239649 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-239649 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/no-preload/serial/Pause (3.37s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-239649 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-239649 -n no-preload-239649
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-239649 -n no-preload-239649: exit status 2 (394.517371ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-239649 -n no-preload-239649
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-239649 -n no-preload-239649: exit status 2 (346.067422ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-239649 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-239649 -n no-preload-239649
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-239649 -n no-preload-239649
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.37s)
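Note: after pause, the status query exits 2 with the apiserver reported Paused and the kubelet Stopped; unpause restores both, after which the same queries succeed. The sequence above, condensed:

	out/minikube-linux-arm64 pause -p no-preload-239649
	out/minikube-linux-arm64 status --format='{{.APIServer}}' -p no-preload-239649   # Paused, exit 2
	out/minikube-linux-arm64 unpause -p no-preload-239649
	out/minikube-linux-arm64 status --format='{{.APIServer}}' -p no-preload-239649   # Running, exit 0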

TestStartStop/group/embed-certs/serial/FirstStart (67.83s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-545509 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-545509 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.2: (1m7.827390712s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (67.83s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-lgxbj" [4fbab08d-1124-431a-8f49-f5b88f1e91ad] Running
E1209 11:31:40.519797  592080 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/functional-995264/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.0036323s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-lgxbj" [4fbab08d-1124-431a-8f49-f5b88f1e91ad] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004100467s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-623695 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.12s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.35s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-623695 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.35s)
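Note: VerifyKubernetesImages lists the node's images as JSON and flags anything that is not a stock minikube/Kubernetes image (here the kindnetd and busybox test images). To inspect the same list by hand; the .repoTags field name is an assumption about the JSON shape, not taken from this report:

	out/minikube-linux-arm64 -p old-k8s-version-623695 image list --format=json | jq -r '.[].repoTags[]'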

TestStartStop/group/old-k8s-version/serial/Pause (3.73s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-623695 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p old-k8s-version-623695 --alsologtostderr -v=1: (1.092734597s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-623695 -n old-k8s-version-623695
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-623695 -n old-k8s-version-623695: exit status 2 (465.039489ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-623695 -n old-k8s-version-623695
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-623695 -n old-k8s-version-623695: exit status 2 (425.669394ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-623695 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-623695 -n old-k8s-version-623695
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-623695 -n old-k8s-version-623695
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.73s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (62.46s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-530202 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-530202 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.2: (1m2.461997258s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (62.46s)

TestStartStop/group/embed-certs/serial/DeployApp (10.4s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-545509 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [047ed1e9-d9bf-413f-a6d6-da14cabff918] Pending
helpers_test.go:344: "busybox" [047ed1e9-d9bf-413f-a6d6-da14cabff918] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [047ed1e9-d9bf-413f-a6d6-da14cabff918] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.00412058s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-545509 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.40s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.22s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-545509 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-545509 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.094524918s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-545509 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.22s)

TestStartStop/group/embed-certs/serial/Stop (12.02s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-545509 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-545509 --alsologtostderr -v=3: (12.016276883s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.02s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-545509 -n embed-certs-545509
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-545509 -n embed-certs-545509: exit status 7 (80.89537ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-545509 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (280.37s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-545509 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-545509 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.2: (4m39.971529948s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-545509 -n embed-certs-545509
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (280.37s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.52s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-530202 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [c91a63ef-00c0-4cdf-b51f-b533bbf5f9d3] Pending
helpers_test.go:344: "busybox" [c91a63ef-00c0-4cdf-b51f-b533bbf5f9d3] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [c91a63ef-00c0-4cdf-b51f-b533bbf5f9d3] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.004050862s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-530202 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.52s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.53s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-530202 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-530202 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.368903694s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-530202 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.53s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.29s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-530202 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-530202 --alsologtostderr -v=3: (12.285935387s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.29s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-530202 -n default-k8s-diff-port-530202
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-530202 -n default-k8s-diff-port-530202: exit status 7 (98.524987ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-530202 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.23s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (267.9s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-530202 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.2
E1209 11:34:56.974624  592080 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/old-k8s-version-623695/client.crt: no such file or directory" logger="UnhandledError"
E1209 11:34:56.981247  592080 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/old-k8s-version-623695/client.crt: no such file or directory" logger="UnhandledError"
E1209 11:34:56.992764  592080 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/old-k8s-version-623695/client.crt: no such file or directory" logger="UnhandledError"
E1209 11:34:57.014240  592080 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/old-k8s-version-623695/client.crt: no such file or directory" logger="UnhandledError"
E1209 11:34:57.055819  592080 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/old-k8s-version-623695/client.crt: no such file or directory" logger="UnhandledError"
E1209 11:34:57.137300  592080 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/old-k8s-version-623695/client.crt: no such file or directory" logger="UnhandledError"
E1209 11:34:57.298727  592080 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/old-k8s-version-623695/client.crt: no such file or directory" logger="UnhandledError"
E1209 11:34:57.620459  592080 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/old-k8s-version-623695/client.crt: no such file or directory" logger="UnhandledError"
E1209 11:34:58.262428  592080 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/old-k8s-version-623695/client.crt: no such file or directory" logger="UnhandledError"
E1209 11:34:59.544265  592080 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/old-k8s-version-623695/client.crt: no such file or directory" logger="UnhandledError"
E1209 11:35:02.106110  592080 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/old-k8s-version-623695/client.crt: no such file or directory" logger="UnhandledError"
E1209 11:35:07.227454  592080 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/old-k8s-version-623695/client.crt: no such file or directory" logger="UnhandledError"
E1209 11:35:10.637651  592080 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/addons-764596/client.crt: no such file or directory" logger="UnhandledError"
E1209 11:35:17.469723  592080 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/old-k8s-version-623695/client.crt: no such file or directory" logger="UnhandledError"
E1209 11:35:27.567499  592080 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/addons-764596/client.crt: no such file or directory" logger="UnhandledError"
E1209 11:35:37.951959  592080 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/old-k8s-version-623695/client.crt: no such file or directory" logger="UnhandledError"
E1209 11:36:08.513294  592080 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/no-preload-239649/client.crt: no such file or directory" logger="UnhandledError"
E1209 11:36:08.519761  592080 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/no-preload-239649/client.crt: no such file or directory" logger="UnhandledError"
E1209 11:36:08.531264  592080 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/no-preload-239649/client.crt: no such file or directory" logger="UnhandledError"
E1209 11:36:08.552902  592080 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/no-preload-239649/client.crt: no such file or directory" logger="UnhandledError"
E1209 11:36:08.594351  592080 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/no-preload-239649/client.crt: no such file or directory" logger="UnhandledError"
E1209 11:36:08.675886  592080 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/no-preload-239649/client.crt: no such file or directory" logger="UnhandledError"
E1209 11:36:08.837544  592080 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/no-preload-239649/client.crt: no such file or directory" logger="UnhandledError"
E1209 11:36:09.159391  592080 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/no-preload-239649/client.crt: no such file or directory" logger="UnhandledError"
E1209 11:36:09.801382  592080 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/no-preload-239649/client.crt: no such file or directory" logger="UnhandledError"
E1209 11:36:11.082812  592080 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/no-preload-239649/client.crt: no such file or directory" logger="UnhandledError"
E1209 11:36:13.644189  592080 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/no-preload-239649/client.crt: no such file or directory" logger="UnhandledError"
E1209 11:36:18.766209  592080 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/no-preload-239649/client.crt: no such file or directory" logger="UnhandledError"
E1209 11:36:18.913790  592080 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/old-k8s-version-623695/client.crt: no such file or directory" logger="UnhandledError"
E1209 11:36:29.008314  592080 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/no-preload-239649/client.crt: no such file or directory" logger="UnhandledError"
E1209 11:36:40.520855  592080 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/functional-995264/client.crt: no such file or directory" logger="UnhandledError"
E1209 11:36:49.490629  592080 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/no-preload-239649/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-530202 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.2: (4m27.488599689s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-530202 -n default-k8s-diff-port-530202
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (267.90s)
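
This profile pins the API server to port 8444 via --apiserver-port=8444 instead of minikube's default 8443. A sketch for checking that the restarted cluster still serves on that port, assuming the kubeconfig cluster entry is named after the profile:

	kubectl config view -o jsonpath='{.clusters[?(@.name=="default-k8s-diff-port-530202")].cluster.server}'
	# expect an https:// URL ending in :8444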

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-7pf4v" [ea95a333-f44f-431a-b282-3c3cbcefbd36] Running
E1209 11:37:30.452050  592080 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/no-preload-239649/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004223388s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)
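
UserAppExistsAfterStop and AddonExistsAfterStop both reduce to "a pod matching a label becomes healthy within a deadline". Outside the Go harness, `kubectl wait` expresses the same check in one line:

	kubectl --context embed-certs-545509 -n kubernetes-dashboard wait \
	  --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=9m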

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.14s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-7pf4v" [ea95a333-f44f-431a-b282-3c3cbcefbd36] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003965565s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-545509 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.14s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.35s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-545509 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241007-36f62932
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.35s)
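
VerifyKubernetesImages lists every image cached in the node and flags anything outside the expected Kubernetes set (here the kindnet and busybox images, which are acceptable for this suite). To eyeball the same list, a sketch using jq (the filter assumes repoTags is the field name in minikube's JSON output):

	out/minikube-linux-arm64 -p embed-certs-545509 image list --format=json | jq -r '.[].repoTags[]' | sort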

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (3.43s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-545509 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p embed-certs-545509 --alsologtostderr -v=1: (1.13935019s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-545509 -n embed-certs-545509
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-545509 -n embed-certs-545509: exit status 2 (335.045451ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-545509 -n embed-certs-545509
E1209 11:37:40.836165  592080 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/old-k8s-version-623695/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-545509 -n embed-certs-545509: exit status 2 (332.915886ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-545509 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-545509 -n embed-certs-545509
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-545509 -n embed-certs-545509
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.43s)
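
The Pause round trip shows the status conventions used throughout this report: after `pause`, the {{.APIServer}} template prints Paused and {{.Kubelet}} prints Stopped, each with exit status 2, and `unpause` returns both to Running with exit status 0. By hand:

	out/minikube-linux-arm64 pause -p embed-certs-545509
	out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-545509   # Paused, exit 2
	out/minikube-linux-arm64 unpause -p embed-certs-545509
	out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-545509   # Running, exit 0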

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (39.13s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-919044 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-919044 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.2: (39.131780274s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (39.13s)
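
FirstStart for newest-cni mostly exercises flag plumbing: --feature-gates is forwarded to the Kubernetes components and --extra-config=kubeadm.pod-network-cidr overrides kubeadm's pod CIDR. A sketch for confirming the CIDR took effect, assuming it surfaces as the node's podCIDR:

	kubectl --context newest-cni-919044 get nodes -o jsonpath='{.items[0].spec.podCIDR}'
	# expect a subnet carved from 10.42.0.0/16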

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-q45tn" [85510b7d-86aa-47ab-92b0-25308dcf3c3c] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004615s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-q45tn" [85510b7d-86aa-47ab-92b0-25308dcf3c3c] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.008235431s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-530202 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.12s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-530202 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241007-36f62932
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.27s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (4.18s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-530202 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p default-k8s-diff-port-530202 --alsologtostderr -v=1: (1.335894654s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-530202 -n default-k8s-diff-port-530202
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-530202 -n default-k8s-diff-port-530202: exit status 2 (422.567873ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-530202 -n default-k8s-diff-port-530202
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-530202 -n default-k8s-diff-port-530202: exit status 2 (411.066719ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-530202 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-530202 -n default-k8s-diff-port-530202
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-530202 -n default-k8s-diff-port-530202
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (4.18s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (60.54s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-776941 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-776941 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (1m0.543290948s)
--- PASS: TestNetworkPlugins/group/auto/Start (60.54s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.25s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-919044 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-919044 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (2.25328692s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.25s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.35s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-919044 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-919044 --alsologtostderr -v=3: (1.35451772s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.35s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.26s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-919044 -n newest-cni-919044
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-919044 -n newest-cni-919044: exit status 7 (93.222136ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-919044 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.26s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (22.58s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-919044 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-919044 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.2: (22.041781004s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-919044 -n newest-cni-919044
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (22.58s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.42s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-919044 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241007-36f62932
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.42s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (4.06s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-919044 --alsologtostderr -v=1
E1209 11:38:52.373478  592080 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/no-preload-239649/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p newest-cni-919044 --alsologtostderr -v=1: (1.371095274s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-919044 -n newest-cni-919044
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-919044 -n newest-cni-919044: exit status 2 (429.307647ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-919044 -n newest-cni-919044
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-919044 -n newest-cni-919044: exit status 2 (431.281517ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-919044 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-919044 -n newest-cni-919044
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-919044 -n newest-cni-919044
--- PASS: TestStartStop/group/newest-cni/serial/Pause (4.06s)
E1209 11:44:08.701549  592080 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/auto-776941/client.crt: no such file or directory" logger="UnhandledError"
E1209 11:44:08.707968  592080 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/auto-776941/client.crt: no such file or directory" logger="UnhandledError"
E1209 11:44:08.719446  592080 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/auto-776941/client.crt: no such file or directory" logger="UnhandledError"
E1209 11:44:08.740998  592080 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/auto-776941/client.crt: no such file or directory" logger="UnhandledError"
E1209 11:44:08.782477  592080 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/auto-776941/client.crt: no such file or directory" logger="UnhandledError"
E1209 11:44:08.863990  592080 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/auto-776941/client.crt: no such file or directory" logger="UnhandledError"
E1209 11:44:09.025641  592080 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/auto-776941/client.crt: no such file or directory" logger="UnhandledError"
E1209 11:44:09.347667  592080 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/auto-776941/client.crt: no such file or directory" logger="UnhandledError"
E1209 11:44:09.989710  592080 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/auto-776941/client.crt: no such file or directory" logger="UnhandledError"
E1209 11:44:11.271193  592080 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/auto-776941/client.crt: no such file or directory" logger="UnhandledError"
E1209 11:44:13.833327  592080 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/auto-776941/client.crt: no such file or directory" logger="UnhandledError"

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (71.99s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-776941 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-776941 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (1m11.992026138s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (71.99s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.38s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-776941 "pgrep -a kubelet"
I1209 11:39:08.298002  592080 config.go:182] Loaded profile config "auto-776941": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.38s)
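
KubeletFlags simply dumps the kubelet command line inside the node via `minikube ssh`, which is handy for checking which runtime endpoint was wired up. For example (the grep pattern is illustrative):

	out/minikube-linux-arm64 ssh -p auto-776941 "pgrep -a kubelet" | tr ' ' '\n' | grep -i runtime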

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (10.44s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-776941 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-5l5jg" [0def7e59-352e-4808-b73a-3f77043903a8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-5l5jg" [0def7e59-352e-4808-b73a-3f77043903a8] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.004735597s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.44s)
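
NetCatPod uses `kubectl replace --force`, which deletes and recreates the deployment, so reruns against a reused profile don't fail with AlreadyExists. The equivalent manual sequence:

	kubectl --context auto-776941 replace --force -f testdata/netcat-deployment.yaml
	kubectl --context auto-776941 wait --for=condition=Available deploy/netcat --timeout=15m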

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.25s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-776941 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.25s)
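
The DNS probe resolves the kubernetes.default service from inside a pod, which only succeeds when both the CNI and CoreDNS are functional. The address nslookup returns should match the API service's cluster IP:

	kubectl --context auto-776941 exec deployment/netcat -- nslookup kubernetes.default
	kubectl --context auto-776941 get svc kubernetes -o jsonpath='{.spec.clusterIP}'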

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-776941 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-776941 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.20s)
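
Localhost and HairPin both use netcat's zero-I/O port scan (-z) with a 5-second timeout (-w 5). HairPin is the stricter check: the pod dials its own service name, so the connection must leave via the service VIP and hairpin back to the same pod:

	kubectl --context auto-776941 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
	echo $?   # 0 when hairpin traffic works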

                                                
                                    
TestNetworkPlugins/group/calico/Start (68.69s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-776941 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
E1209 11:39:56.974275  592080 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/old-k8s-version-623695/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-776941 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (1m8.690809743s)
--- PASS: TestNetworkPlugins/group/calico/Start (68.69s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-2s4f8" [18403d06-1201-4037-ae17-d38a80dcb7ce] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004139869s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
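
ControllerPod only runs for CNIs that ship a node agent; for kindnet it waits on the daemonset pods labelled app=kindnet. The same check by hand:

	kubectl --context kindnet-776941 -n kube-system get pods -l app=kindnet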

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-776941 "pgrep -a kubelet"
I1209 11:40:16.477629  592080 config.go:182] Loaded profile config "kindnet-776941": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.33s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (11.32s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-776941 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-gb9tm" [85d1ccc4-9c15-42a0-a646-128cf684e1be] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-gb9tm" [85d1ccc4-9c15-42a0-a646-128cf684e1be] Running
E1209 11:40:24.677781  592080 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/old-k8s-version-623695/client.crt: no such file or directory" logger="UnhandledError"
E1209 11:40:27.567977  592080 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/addons-764596/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.004597187s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.32s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-776941 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.21s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-776941 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.23s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-776941 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.23s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (57.52s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-776941 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-776941 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (57.520220073s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (57.52s)
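
Unlike the built-in names used elsewhere in this group (kindnet, calico, flannel), --cni also accepts a path to a manifest, which is how this profile boots with the repo's testdata/kube-flannel.yaml as its CNI. Generic form, with a placeholder path:

	out/minikube-linux-arm64 start -p <profile> --cni=/path/to/cni-manifest.yaml --driver=docker --container-runtime=containerd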

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.07s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-g482k" [65c4415b-6979-422e-9671-43e029a7e299] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.071629889s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.07s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.58s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-776941 "pgrep -a kubelet"
I1209 11:41:00.821530  592080 config.go:182] Loaded profile config "calico-776941": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.58s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (11.37s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-776941 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-ln5v7" [5a6f94e4-1adb-40c4-9c9d-c3c481177127] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-ln5v7" [5a6f94e4-1adb-40c4-9c9d-c3c481177127] Running
E1209 11:41:08.512857  592080 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/no-preload-239649/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.004184243s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.37s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.25s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-776941 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.25s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.22s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-776941 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.22s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.24s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-776941 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.24s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (72.08s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-776941 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
E1209 11:41:40.520037  592080 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/functional-995264/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-776941 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (1m12.084365685s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (72.08s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.43s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-776941 "pgrep -a kubelet"
I1209 11:41:50.053441  592080 config.go:182] Loaded profile config "custom-flannel-776941": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.43s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (9.35s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-776941 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-6zsn6" [c063d1b2-5b23-44e2-ba9c-6b7b3cc8eb51] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-6zsn6" [c063d1b2-5b23-44e2-ba9c-6b7b3cc8eb51] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.005230083s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.35s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-776941 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.24s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-776941 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-776941 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.18s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (53.88s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-776941 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-776941 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (53.880575716s)
--- PASS: TestNetworkPlugins/group/flannel/Start (53.88s)
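Note: each Start test brings up a fresh profile with the CNI under test selected via --cni; everything after it (KubeletFlags, NetCatPod, DNS, Localhost, HairPin) runs against that cluster. A trimmed reproduction of this start, dropping only the harness flags (--alsologtostderr and the 15m wait timeout):

	out/minikube-linux-arm64 start -p flannel-776941 --memory=3072 \
	  --cni=flannel --driver=docker --container-runtime=containerd --wait=true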

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-776941 "pgrep -a kubelet"
I1209 11:42:52.461589  592080 config.go:182] Loaded profile config "enable-default-cni-776941": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.38s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-776941 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-mvr2n" [37c12b31-563f-46dc-9365-bc47c112c271] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-mvr2n" [37c12b31-563f-46dc-9365-bc47c112c271] Running
E1209 11:42:57.986745  592080 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/default-k8s-diff-port-530202/client.crt: no such file or directory" logger="UnhandledError"
E1209 11:42:57.993300  592080 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/default-k8s-diff-port-530202/client.crt: no such file or directory" logger="UnhandledError"
E1209 11:42:58.008862  592080 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/default-k8s-diff-port-530202/client.crt: no such file or directory" logger="UnhandledError"
E1209 11:42:58.030434  592080 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/default-k8s-diff-port-530202/client.crt: no such file or directory" logger="UnhandledError"
E1209 11:42:58.071893  592080 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/default-k8s-diff-port-530202/client.crt: no such file or directory" logger="UnhandledError"
E1209 11:42:58.153360  592080 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/default-k8s-diff-port-530202/client.crt: no such file or directory" logger="UnhandledError"
E1209 11:42:58.314867  592080 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/default-k8s-diff-port-530202/client.crt: no such file or directory" logger="UnhandledError"
E1209 11:42:58.636637  592080 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/default-k8s-diff-port-530202/client.crt: no such file or directory" logger="UnhandledError"
E1209 11:42:59.277979  592080 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/default-k8s-diff-port-530202/client.crt: no such file or directory" logger="UnhandledError"
E1209 11:43:00.559578  592080 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/default-k8s-diff-port-530202/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.004060314s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.35s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-776941 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.21s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-776941 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
E1209 11:43:03.121325  592080 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/default-k8s-diff-port-530202/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-776941 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.19s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-q7mqp" [e3bab014-d24f-4f58-a48e-3d961b869d8f] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.005796061s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
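Note: ControllerPod gates the rest of the flannel group on the CNI's own DaemonSet pod (label app=flannel in namespace kube-flannel) reaching Running. The same wait expressed directly with kubectl; the 120s timeout is an illustrative stand-in for the test's 10m:

	kubectl --context flannel-776941 -n kube-flannel get pods -l app=flannel
	kubectl --context flannel-776941 -n kube-flannel wait \
	  --for=condition=Ready pod -l app=flannel --timeout=120s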

                                                
                                    
TestNetworkPlugins/group/bridge/Start (50.58s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-776941 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-776941 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (50.58292032s)
--- PASS: TestNetworkPlugins/group/bridge/Start (50.58s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.45s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-776941 "pgrep -a kubelet"
I1209 11:43:25.050124  592080 config.go:182] Loaded profile config "flannel-776941": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.45s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (11.40s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-776941 replace --force -f testdata/netcat-deployment.yaml
I1209 11:43:25.412294  592080 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-pffwc" [0143fe2b-ceac-4af1-802f-648cdd5fe613] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-pffwc" [0143fe2b-ceac-4af1-802f-648cdd5fe613] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.00932845s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.40s)
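Note: the kapi.go line above is the deployment-stabilization check: a Deployment counts as stable once status.observedGeneration has caught up with metadata.generation and the ready replica count matches spec.replicas. kubectl can express the same wait in one command (a sketch; the 60s timeout is an illustrative choice):

	kubectl --context flannel-776941 rollout status deployment/netcat --timeout=60s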

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-776941 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.26s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-776941 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-776941 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.24s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-776941 "pgrep -a kubelet"
I1209 11:44:15.217473  592080 config.go:182] Loaded profile config "bridge-776941": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.33s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (10.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-776941 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-9z4ns" [df2ceba6-05dc-4388-a90d-8164df8c83fc] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1209 11:44:18.955314  592080 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/auto-776941/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-9z4ns" [df2ceba6-05dc-4388-a90d-8164df8c83fc] Running
E1209 11:44:19.928413  592080 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/default-k8s-diff-port-530202/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.003876937s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.26s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-776941 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-776941 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-776941 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)

                                                
                                    

Test skip (29/330)

TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.2/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.2/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.2/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0.57s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-971916 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-971916" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-971916
--- SKIP: TestDownloadOnlyKic (0.57s)

                                                
                                    
TestOffline (0s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
TestAddons/serial/GCPAuth/RealCredentials (0.01s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:698: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.01s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:972: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)
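Note: several skips in this run key off the configured container runtime rather than the platform. One way to read the runtime back out of each profile's stored config; the jq field path reflects the profile-list JSON of recent minikube releases and may differ between versions:

	out/minikube-linux-arm64 profile list -o json \
	  | jq -r '.valid[] | "\(.Name)\t\(.Config.KubernetesConfig.ContainerRuntime)"'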

                                                
                                    
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.28s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-350992" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-350992
--- SKIP: TestStartStop/group/disable-driver-mounts (0.28s)

                                                
                                    
TestNetworkPlugins/group/kubenet (4.87s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-776941 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-776941

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-776941

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-776941

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-776941

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-776941

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-776941

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-776941

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-776941

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-776941

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-776941

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-776941" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-776941"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-776941" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-776941"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-776941" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-776941"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: kubenet-776941

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-776941" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-776941"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-776941" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-776941"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-776941" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-776941" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-776941" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-776941" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-776941" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-776941" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-776941" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-776941" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-776941" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-776941"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-776941" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-776941"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-776941" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-776941"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-776941" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-776941"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-776941" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-776941"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-776941" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-776941" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-776941" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-776941" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-776941"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-776941" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-776941"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-776941" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-776941"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-776941" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-776941"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-776941" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-776941"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null
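Note: the empty kubeconfig above is the root cause of every failure in this debugLogs dump: the kubenet profile was skipped before minikube start ever ran, so no kubenet-776941 context was written, and each kubectl probe fails with "context was not found" while each minikube probe reports the missing profile. Confirming which contexts actually exist is quick:

	kubectl config get-contexts
	kubectl config current-context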

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-776941

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-776941" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-776941"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-776941" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-776941"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-776941" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-776941"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-776941" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-776941"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-776941" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-776941"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-776941" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-776941"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-776941" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-776941"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-776941" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-776941"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-776941" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-776941"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-776941" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-776941"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-776941" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-776941"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-776941" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-776941"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-776941" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-776941"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-776941" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-776941"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-776941" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-776941"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-776941" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-776941"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-776941" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-776941"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-776941" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-776941"

                                                
                                                
----------------------- debugLogs end: kubenet-776941 [took: 4.643818118s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-776941" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-776941
--- SKIP: TestNetworkPlugins/group/kubenet (4.87s)

                                                
                                    
TestNetworkPlugins/group/cilium (5.06s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-776941 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-776941

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-776941

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-776941

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-776941

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-776941

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-776941

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-776941

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-776941

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-776941

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-776941

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-776941" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-776941"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-776941" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-776941"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-776941" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-776941"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: cilium-776941

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-776941" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-776941"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-776941" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-776941"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-776941" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-776941" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-776941" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-776941" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-776941" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-776941" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-776941" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-776941" does not exist

>>> host: /etc/cni:
* Profile "cilium-776941" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-776941"

>>> host: ip a s:
* Profile "cilium-776941" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-776941"

>>> host: ip r s:
* Profile "cilium-776941" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-776941"

>>> host: iptables-save:
* Profile "cilium-776941" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-776941"

>>> host: iptables table nat:
* Profile "cilium-776941" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-776941"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-776941

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-776941

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-776941" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-776941" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-776941

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-776941

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-776941" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-776941" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-776941" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-776941" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-776941" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-776941" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-776941"

>>> host: kubelet daemon config:
* Profile "cilium-776941" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-776941"

>>> k8s: kubelet logs:
* Profile "cilium-776941" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-776941"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-776941" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-776941"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-776941" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-776941"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null
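Note: the kubectl config dump above is empty (clusters, contexts, and users are all null), which is the single cause of every kubectl failure in this log: there is no "cilium-776941" context to select. A quick manual check, assuming kubectl and minikube are on PATH (both are standard CLI commands):

	kubectl config get-contexts   # prints only the header row when no contexts exist
	minikube profile list         # "cilium-776941" will not appear in the table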
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-776941

>>> host: docker daemon status:
* Profile "cilium-776941" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-776941"

>>> host: docker daemon config:
* Profile "cilium-776941" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-776941"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-776941" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-776941"

>>> host: docker system info:
* Profile "cilium-776941" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-776941"

>>> host: cri-docker daemon status:
* Profile "cilium-776941" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-776941"

>>> host: cri-docker daemon config:
* Profile "cilium-776941" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-776941"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-776941" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-776941"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-776941" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-776941"

>>> host: cri-dockerd version:
* Profile "cilium-776941" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-776941"

>>> host: containerd daemon status:
* Profile "cilium-776941" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-776941"

>>> host: containerd daemon config:
* Profile "cilium-776941" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-776941"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-776941" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-776941"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-776941" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-776941"

>>> host: containerd config dump:
* Profile "cilium-776941" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-776941"

>>> host: crio daemon status:
* Profile "cilium-776941" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-776941"

>>> host: crio daemon config:
* Profile "cilium-776941" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-776941"

>>> host: /etc/crio:
* Profile "cilium-776941" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-776941"

>>> host: crio config:
* Profile "cilium-776941" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-776941"
----------------------- debugLogs end: cilium-776941 [took: 4.826325072s] --------------------------------
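Note: every collector above failed for the same root cause: the "cilium-776941" profile was never created, because the cilium test was skipped before a cluster was ever started (see the SKIP result below). For a profile that does exist, the same diagnostics can be gathered by hand; a minimal sketch, assuming a current minikube CLI and substituting a real profile name for the hypothetical <profile> placeholder:

	minikube profile list                       # confirm the profile exists
	minikube -p <profile> logs --file=logs.txt  # dump cluster logs to a file for inspection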
helpers_test.go:175: Cleaning up "cilium-776941" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-776941
--- SKIP: TestNetworkPlugins/group/cilium (5.06s)