Test Report: Docker_Linux_containerd_arm64 20317

bb508b30435b2a744d00b2f75d06f98d338973f1:2025-01-27:38093

Test failures (1/330)

Order | Failed test | Duration (s)
304   | TestStartStop/group/old-k8s-version/serial/SecondStart | 376.37
TestStartStop/group/old-k8s-version/serial/SecondStart (376.37s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-813213 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p old-k8s-version-813213 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: exit status 102 (6m12.364929582s)
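
To reproduce this outside CI, the same invocation can be replayed against a local profile. A minimal sketch, assuming an arm64 Linux host with Docker and a minikube v1.35.0 binary on PATH (the CI run uses its workspace build at out/minikube-linux-arm64; the flags below are copied verbatim from the failing command):

    # Replay the SecondStart invocation the test runs (exit status was 102 in CI).
    minikube start -p old-k8s-version-813213 --memory=2200 --alsologtostderr \
      --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system \
      --disable-driver-mounts --keep-context=false --driver=docker \
      --container-runtime=containerd --kubernetes-version=v1.20.0
    echo "exit status: $?"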

-- stdout --
	* [old-k8s-version-813213] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20317
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20317-1181389/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20317-1181389/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.32.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.1
	* Using the docker driver based on existing profile
	* Starting "old-k8s-version-813213" primary control-plane node in "old-k8s-version-813213" cluster
	* Pulling base image v0.0.46 ...
	* Restarting existing docker container for "old-k8s-version-813213" ...
	* Preparing Kubernetes v1.20.0 on containerd 1.7.24 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image registry.k8s.io/echoserver:1.4
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-813213 addons enable metrics-server
	
	* Enabled addons: storage-provisioner, metrics-server, dashboard, default-storageclass
	
	

-- /stdout --
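
When SecondStart fails this way, the profile is usually left behind and can be inspected directly. A hedged sketch, assuming the old-k8s-version-813213 profile and its kubeconfig context still exist on the host:

    # Component-level status of the profile.
    minikube status -p old-k8s-version-813213
    # Dump kubelet/apiserver/containerd logs to a file for triage.
    minikube logs -p old-k8s-version-813213 --file=old-k8s-version-813213.log
    # Pod view; per the config dump in the stderr below, the test points the
    # metrics-server registry at fake.domain, so those image pulls are expected to fail.
    kubectl --context old-k8s-version-813213 get pods -A
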
** stderr ** 
	I0127 13:18:13.624763 1391899 out.go:345] Setting OutFile to fd 1 ...
	I0127 13:18:13.625186 1391899 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 13:18:13.625231 1391899 out.go:358] Setting ErrFile to fd 2...
	I0127 13:18:13.625261 1391899 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 13:18:13.625650 1391899 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20317-1181389/.minikube/bin
	I0127 13:18:13.626248 1391899 out.go:352] Setting JSON to false
	I0127 13:18:13.627752 1391899 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":21639,"bootTime":1737962255,"procs":229,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0127 13:18:13.627881 1391899 start.go:139] virtualization:  
	I0127 13:18:13.635568 1391899 out.go:177] * [old-k8s-version-813213] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	I0127 13:18:13.638498 1391899 out.go:177]   - MINIKUBE_LOCATION=20317
	I0127 13:18:13.638527 1391899 notify.go:220] Checking for updates...
	I0127 13:18:13.644677 1391899 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 13:18:13.647182 1391899 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20317-1181389/kubeconfig
	I0127 13:18:13.649683 1391899 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20317-1181389/.minikube
	I0127 13:18:13.652516 1391899 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0127 13:18:13.655141 1391899 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 13:18:13.658354 1391899 config.go:182] Loaded profile config "old-k8s-version-813213": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0127 13:18:13.661616 1391899 out.go:177] * Kubernetes 1.32.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.1
	I0127 13:18:13.664176 1391899 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 13:18:13.731148 1391899 docker.go:123] docker version: linux-27.5.1:Docker Engine - Community
	I0127 13:18:13.731314 1391899 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0127 13:18:13.847397 1391899 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:61 SystemTime:2025-01-27 13:18:13.837458126 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0127 13:18:13.847518 1391899 docker.go:318] overlay module found
	I0127 13:18:13.850595 1391899 out.go:177] * Using the docker driver based on existing profile
	I0127 13:18:13.853110 1391899 start.go:297] selected driver: docker
	I0127 13:18:13.853149 1391899 start.go:901] validating driver "docker" against &{Name:old-k8s-version-813213 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-813213 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 13:18:13.853262 1391899 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 13:18:13.853997 1391899 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0127 13:18:13.960155 1391899 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:61 SystemTime:2025-01-27 13:18:13.938431586 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0127 13:18:13.960572 1391899 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 13:18:13.960603 1391899 cni.go:84] Creating CNI manager for ""
	I0127 13:18:13.960652 1391899 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0127 13:18:13.960697 1391899 start.go:340] cluster config:
	{Name:old-k8s-version-813213 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-813213 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 13:18:13.965201 1391899 out.go:177] * Starting "old-k8s-version-813213" primary control-plane node in "old-k8s-version-813213" cluster
	I0127 13:18:13.967819 1391899 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0127 13:18:13.970367 1391899 out.go:177] * Pulling base image v0.0.46 ...
	I0127 13:18:13.972849 1391899 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0127 13:18:13.972908 1391899 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20317-1181389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0127 13:18:13.972923 1391899 cache.go:56] Caching tarball of preloaded images
	I0127 13:18:13.973021 1391899 preload.go:172] Found /home/jenkins/minikube-integration/20317-1181389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0127 13:18:13.973052 1391899 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on containerd
	I0127 13:18:13.973165 1391899 profile.go:143] Saving config to /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/old-k8s-version-813213/config.json ...
	I0127 13:18:13.973383 1391899 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon
	I0127 13:18:14.018222 1391899 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon, skipping pull
	I0127 13:18:14.018251 1391899 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 exists in daemon, skipping load
	I0127 13:18:14.018264 1391899 cache.go:227] Successfully downloaded all kic artifacts
	I0127 13:18:14.018289 1391899 start.go:360] acquireMachinesLock for old-k8s-version-813213: {Name:mkdb8ba967fbef4a000dd6e7c9825cdd41640f4f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 13:18:14.018352 1391899 start.go:364] duration metric: took 40.943µs to acquireMachinesLock for "old-k8s-version-813213"
	I0127 13:18:14.018379 1391899 start.go:96] Skipping create...Using existing machine configuration
	I0127 13:18:14.018388 1391899 fix.go:54] fixHost starting: 
	I0127 13:18:14.018655 1391899 cli_runner.go:164] Run: docker container inspect old-k8s-version-813213 --format={{.State.Status}}
	I0127 13:18:14.050862 1391899 fix.go:112] recreateIfNeeded on old-k8s-version-813213: state=Stopped err=<nil>
	W0127 13:18:14.050897 1391899 fix.go:138] unexpected machine state, will restart: <nil>
	I0127 13:18:14.053885 1391899 out.go:177] * Restarting existing docker container for "old-k8s-version-813213" ...
	I0127 13:18:14.056502 1391899 cli_runner.go:164] Run: docker start old-k8s-version-813213
	I0127 13:18:14.513107 1391899 cli_runner.go:164] Run: docker container inspect old-k8s-version-813213 --format={{.State.Status}}
	I0127 13:18:14.552659 1391899 kic.go:430] container "old-k8s-version-813213" state is running.
	I0127 13:18:14.553079 1391899 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-813213
	I0127 13:18:14.577336 1391899 profile.go:143] Saving config to /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/old-k8s-version-813213/config.json ...
	I0127 13:18:14.577562 1391899 machine.go:93] provisionDockerMachine start ...
	I0127 13:18:14.577626 1391899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-813213
	I0127 13:18:14.610941 1391899 main.go:141] libmachine: Using SSH client type: native
	I0127 13:18:14.611198 1391899 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4132a0] 0x415ae0 <nil>  [] 0s} 127.0.0.1 34227 <nil> <nil>}
	I0127 13:18:14.611207 1391899 main.go:141] libmachine: About to run SSH command:
	hostname
	I0127 13:18:14.612217 1391899 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42146->127.0.0.1:34227: read: connection reset by peer
	I0127 13:18:17.761119 1391899 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-813213
	
	I0127 13:18:17.761150 1391899 ubuntu.go:169] provisioning hostname "old-k8s-version-813213"
	I0127 13:18:17.761227 1391899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-813213
	I0127 13:18:17.791317 1391899 main.go:141] libmachine: Using SSH client type: native
	I0127 13:18:17.791561 1391899 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4132a0] 0x415ae0 <nil>  [] 0s} 127.0.0.1 34227 <nil> <nil>}
	I0127 13:18:17.791580 1391899 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-813213 && echo "old-k8s-version-813213" | sudo tee /etc/hostname
	I0127 13:18:17.951706 1391899 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-813213
	
	I0127 13:18:17.951865 1391899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-813213
	I0127 13:18:17.981356 1391899 main.go:141] libmachine: Using SSH client type: native
	I0127 13:18:17.981618 1391899 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4132a0] 0x415ae0 <nil>  [] 0s} 127.0.0.1 34227 <nil> <nil>}
	I0127 13:18:17.981635 1391899 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-813213' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-813213/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-813213' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 13:18:18.117544 1391899 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 13:18:18.117573 1391899 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20317-1181389/.minikube CaCertPath:/home/jenkins/minikube-integration/20317-1181389/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20317-1181389/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20317-1181389/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20317-1181389/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20317-1181389/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20317-1181389/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20317-1181389/.minikube}
	I0127 13:18:18.117593 1391899 ubuntu.go:177] setting up certificates
	I0127 13:18:18.117604 1391899 provision.go:84] configureAuth start
	I0127 13:18:18.117675 1391899 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-813213
	I0127 13:18:18.140023 1391899 provision.go:143] copyHostCerts
	I0127 13:18:18.140110 1391899 exec_runner.go:144] found /home/jenkins/minikube-integration/20317-1181389/.minikube/ca.pem, removing ...
	I0127 13:18:18.140132 1391899 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20317-1181389/.minikube/ca.pem
	I0127 13:18:18.140209 1391899 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20317-1181389/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20317-1181389/.minikube/ca.pem (1082 bytes)
	I0127 13:18:18.140308 1391899 exec_runner.go:144] found /home/jenkins/minikube-integration/20317-1181389/.minikube/cert.pem, removing ...
	I0127 13:18:18.140319 1391899 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20317-1181389/.minikube/cert.pem
	I0127 13:18:18.140348 1391899 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20317-1181389/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20317-1181389/.minikube/cert.pem (1123 bytes)
	I0127 13:18:18.140407 1391899 exec_runner.go:144] found /home/jenkins/minikube-integration/20317-1181389/.minikube/key.pem, removing ...
	I0127 13:18:18.140414 1391899 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20317-1181389/.minikube/key.pem
	I0127 13:18:18.140438 1391899 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20317-1181389/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20317-1181389/.minikube/key.pem (1675 bytes)
	I0127 13:18:18.140491 1391899 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20317-1181389/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20317-1181389/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20317-1181389/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-813213 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-813213]
	I0127 13:18:18.565007 1391899 provision.go:177] copyRemoteCerts
	I0127 13:18:18.565099 1391899 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 13:18:18.565141 1391899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-813213
	I0127 13:18:18.581838 1391899 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34227 SSHKeyPath:/home/jenkins/minikube-integration/20317-1181389/.minikube/machines/old-k8s-version-813213/id_rsa Username:docker}
	I0127 13:18:18.673662 1391899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-1181389/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0127 13:18:18.698715 1391899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-1181389/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0127 13:18:18.722959 1391899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-1181389/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0127 13:18:18.747579 1391899 provision.go:87] duration metric: took 629.95728ms to configureAuth
	I0127 13:18:18.747608 1391899 ubuntu.go:193] setting minikube options for container-runtime
	I0127 13:18:18.747802 1391899 config.go:182] Loaded profile config "old-k8s-version-813213": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0127 13:18:18.747814 1391899 machine.go:96] duration metric: took 4.170244742s to provisionDockerMachine
	I0127 13:18:18.747822 1391899 start.go:293] postStartSetup for "old-k8s-version-813213" (driver="docker")
	I0127 13:18:18.747833 1391899 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 13:18:18.747897 1391899 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 13:18:18.747940 1391899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-813213
	I0127 13:18:18.765180 1391899 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34227 SSHKeyPath:/home/jenkins/minikube-integration/20317-1181389/.minikube/machines/old-k8s-version-813213/id_rsa Username:docker}
	I0127 13:18:18.855629 1391899 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 13:18:18.859732 1391899 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0127 13:18:18.859775 1391899 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0127 13:18:18.859786 1391899 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0127 13:18:18.859797 1391899 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0127 13:18:18.859809 1391899 filesync.go:126] Scanning /home/jenkins/minikube-integration/20317-1181389/.minikube/addons for local assets ...
	I0127 13:18:18.859868 1391899 filesync.go:126] Scanning /home/jenkins/minikube-integration/20317-1181389/.minikube/files for local assets ...
	I0127 13:18:18.859950 1391899 filesync.go:149] local asset: /home/jenkins/minikube-integration/20317-1181389/.minikube/files/etc/ssl/certs/11867732.pem -> 11867732.pem in /etc/ssl/certs
	I0127 13:18:18.860073 1391899 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 13:18:18.871076 1391899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-1181389/.minikube/files/etc/ssl/certs/11867732.pem --> /etc/ssl/certs/11867732.pem (1708 bytes)
	I0127 13:18:18.901681 1391899 start.go:296] duration metric: took 153.84267ms for postStartSetup
	I0127 13:18:18.901770 1391899 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0127 13:18:18.901815 1391899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-813213
	I0127 13:18:18.926057 1391899 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34227 SSHKeyPath:/home/jenkins/minikube-integration/20317-1181389/.minikube/machines/old-k8s-version-813213/id_rsa Username:docker}
	I0127 13:18:19.018654 1391899 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0127 13:18:19.024410 1391899 fix.go:56] duration metric: took 5.006014118s for fixHost
	I0127 13:18:19.024432 1391899 start.go:83] releasing machines lock for "old-k8s-version-813213", held for 5.006066358s
	I0127 13:18:19.024509 1391899 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-813213
	I0127 13:18:19.050859 1391899 ssh_runner.go:195] Run: cat /version.json
	I0127 13:18:19.050912 1391899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-813213
	I0127 13:18:19.051223 1391899 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 13:18:19.051274 1391899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-813213
	I0127 13:18:19.072212 1391899 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34227 SSHKeyPath:/home/jenkins/minikube-integration/20317-1181389/.minikube/machines/old-k8s-version-813213/id_rsa Username:docker}
	I0127 13:18:19.090258 1391899 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34227 SSHKeyPath:/home/jenkins/minikube-integration/20317-1181389/.minikube/machines/old-k8s-version-813213/id_rsa Username:docker}
	I0127 13:18:19.176722 1391899 ssh_runner.go:195] Run: systemctl --version
	I0127 13:18:19.331346 1391899 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0127 13:18:19.336207 1391899 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0127 13:18:19.358024 1391899 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0127 13:18:19.358101 1391899 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 13:18:19.370592 1391899 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0127 13:18:19.370616 1391899 start.go:495] detecting cgroup driver to use...
	I0127 13:18:19.370648 1391899 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0127 13:18:19.370696 1391899 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0127 13:18:19.387096 1391899 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0127 13:18:19.403981 1391899 docker.go:217] disabling cri-docker service (if available) ...
	I0127 13:18:19.404077 1391899 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0127 13:18:19.418605 1391899 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0127 13:18:19.431651 1391899 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0127 13:18:19.546224 1391899 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0127 13:18:19.661828 1391899 docker.go:233] disabling docker service ...
	I0127 13:18:19.661916 1391899 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0127 13:18:19.683055 1391899 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0127 13:18:19.698399 1391899 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0127 13:18:19.798047 1391899 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0127 13:18:19.895332 1391899 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0127 13:18:19.908036 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 13:18:19.924905 1391899 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I0127 13:18:19.936750 1391899 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0127 13:18:19.954857 1391899 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0127 13:18:19.955006 1391899 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0127 13:18:19.975264 1391899 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0127 13:18:19.985973 1391899 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0127 13:18:19.996566 1391899 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0127 13:18:20.008030 1391899 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 13:18:20.019931 1391899 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0127 13:18:20.031827 1391899 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 13:18:20.042822 1391899 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0127 13:18:20.053350 1391899 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 13:18:20.165304 1391899 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0127 13:18:20.363734 1391899 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0127 13:18:20.363814 1391899 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0127 13:18:20.367637 1391899 start.go:563] Will wait 60s for crictl version
	I0127 13:18:20.367743 1391899 ssh_runner.go:195] Run: which crictl
	I0127 13:18:20.371254 1391899 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 13:18:20.429795 1391899 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.24
	RuntimeApiVersion:  v1
	I0127 13:18:20.429905 1391899 ssh_runner.go:195] Run: containerd --version
	I0127 13:18:20.457692 1391899 ssh_runner.go:195] Run: containerd --version
	I0127 13:18:20.489582 1391899 out.go:177] * Preparing Kubernetes v1.20.0 on containerd 1.7.24 ...
	I0127 13:18:20.492602 1391899 cli_runner.go:164] Run: docker network inspect old-k8s-version-813213 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0127 13:18:20.512007 1391899 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0127 13:18:20.515924 1391899 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 13:18:20.527444 1391899 kubeadm.go:883] updating cluster {Name:old-k8s-version-813213 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-813213 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 13:18:20.527569 1391899 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0127 13:18:20.527628 1391899 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 13:18:20.572671 1391899 containerd.go:627] all images are preloaded for containerd runtime.
	I0127 13:18:20.572694 1391899 containerd.go:534] Images already preloaded, skipping extraction
	I0127 13:18:20.572754 1391899 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 13:18:20.627158 1391899 containerd.go:627] all images are preloaded for containerd runtime.
	I0127 13:18:20.627191 1391899 cache_images.go:84] Images are preloaded, skipping loading
	I0127 13:18:20.627200 1391899 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.20.0 containerd true true} ...
	I0127 13:18:20.627364 1391899 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=old-k8s-version-813213 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-813213 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0127 13:18:20.627488 1391899 ssh_runner.go:195] Run: sudo crictl info
	I0127 13:18:20.685832 1391899 cni.go:84] Creating CNI manager for ""
	I0127 13:18:20.685860 1391899 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0127 13:18:20.685870 1391899 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0127 13:18:20.685912 1391899 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-813213 NodeName:old-k8s-version-813213 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0127 13:18:20.686081 1391899 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "old-k8s-version-813213"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0127 13:18:20.686165 1391899 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0127 13:18:20.696130 1391899 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 13:18:20.696229 1391899 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 13:18:20.705874 1391899 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (442 bytes)
	I0127 13:18:20.724903 1391899 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 13:18:20.746603 1391899 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2125 bytes)
	I0127 13:18:20.771320 1391899 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0127 13:18:20.774961 1391899 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 13:18:20.788295 1391899 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 13:18:20.922044 1391899 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 13:18:20.944827 1391899 certs.go:68] Setting up /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/old-k8s-version-813213 for IP: 192.168.76.2
	I0127 13:18:20.944846 1391899 certs.go:194] generating shared ca certs ...
	I0127 13:18:20.944863 1391899 certs.go:226] acquiring lock for ca certs: {Name:mk935ce1b2e17056c705e5bfeb742a058476d97f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:18:20.945001 1391899 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20317-1181389/.minikube/ca.key
	I0127 13:18:20.945143 1391899 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20317-1181389/.minikube/proxy-client-ca.key
	I0127 13:18:20.945153 1391899 certs.go:256] generating profile certs ...
	I0127 13:18:20.945241 1391899 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/old-k8s-version-813213/client.key
	I0127 13:18:20.945306 1391899 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/old-k8s-version-813213/apiserver.key.9b729343
	I0127 13:18:20.945348 1391899 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/old-k8s-version-813213/proxy-client.key
	I0127 13:18:20.945475 1391899 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-1181389/.minikube/certs/1186773.pem (1338 bytes)
	W0127 13:18:20.945509 1391899 certs.go:480] ignoring /home/jenkins/minikube-integration/20317-1181389/.minikube/certs/1186773_empty.pem, impossibly tiny 0 bytes
	I0127 13:18:20.945517 1391899 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-1181389/.minikube/certs/ca-key.pem (1675 bytes)
	I0127 13:18:20.945553 1391899 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-1181389/.minikube/certs/ca.pem (1082 bytes)
	I0127 13:18:20.945579 1391899 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-1181389/.minikube/certs/cert.pem (1123 bytes)
	I0127 13:18:20.945600 1391899 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-1181389/.minikube/certs/key.pem (1675 bytes)
	I0127 13:18:20.945644 1391899 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-1181389/.minikube/files/etc/ssl/certs/11867732.pem (1708 bytes)
	I0127 13:18:20.946378 1391899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-1181389/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 13:18:21.024599 1391899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-1181389/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0127 13:18:21.089591 1391899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-1181389/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 13:18:21.138574 1391899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-1181389/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0127 13:18:21.184382 1391899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/old-k8s-version-813213/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0127 13:18:21.230740 1391899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/old-k8s-version-813213/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0127 13:18:21.278096 1391899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/old-k8s-version-813213/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 13:18:21.315319 1391899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/old-k8s-version-813213/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0127 13:18:21.348994 1391899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-1181389/.minikube/certs/1186773.pem --> /usr/share/ca-certificates/1186773.pem (1338 bytes)
	I0127 13:18:21.394974 1391899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-1181389/.minikube/files/etc/ssl/certs/11867732.pem --> /usr/share/ca-certificates/11867732.pem (1708 bytes)
	I0127 13:18:21.437930 1391899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-1181389/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 13:18:21.471330 1391899 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 13:18:21.494863 1391899 ssh_runner.go:195] Run: openssl version
	I0127 13:18:21.502715 1391899 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11867732.pem && ln -fs /usr/share/ca-certificates/11867732.pem /etc/ssl/certs/11867732.pem"
	I0127 13:18:21.519232 1391899 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11867732.pem
	I0127 13:18:21.523603 1391899 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 12:39 /usr/share/ca-certificates/11867732.pem
	I0127 13:18:21.523716 1391899 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11867732.pem
	I0127 13:18:21.532403 1391899 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11867732.pem /etc/ssl/certs/3ec20f2e.0"
	I0127 13:18:21.547144 1391899 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 13:18:21.559394 1391899 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 13:18:21.563333 1391899 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 12:32 /usr/share/ca-certificates/minikubeCA.pem
	I0127 13:18:21.563444 1391899 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 13:18:21.570864 1391899 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0127 13:18:21.591303 1391899 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1186773.pem && ln -fs /usr/share/ca-certificates/1186773.pem /etc/ssl/certs/1186773.pem"
	I0127 13:18:21.603902 1391899 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1186773.pem
	I0127 13:18:21.608985 1391899 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 12:39 /usr/share/ca-certificates/1186773.pem
	I0127 13:18:21.609082 1391899 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1186773.pem
	I0127 13:18:21.617355 1391899 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1186773.pem /etc/ssl/certs/51391683.0"
	I0127 13:18:21.627522 1391899 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 13:18:21.631864 1391899 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0127 13:18:21.640220 1391899 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0127 13:18:21.647806 1391899 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0127 13:18:21.657403 1391899 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0127 13:18:21.666114 1391899 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0127 13:18:21.673971 1391899 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0127 13:18:21.681239 1391899 kubeadm.go:392] StartCluster: {Name:old-k8s-version-813213 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-813213 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 13:18:21.681339 1391899 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0127 13:18:21.681408 1391899 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 13:18:21.732524 1391899 cri.go:89] found id: "2a6b3575611924ecc133f42914e9bdfa06e687ead6ff13a333feb19a4af6a6b0"
	I0127 13:18:21.732554 1391899 cri.go:89] found id: "9977eebb81ceefd915fb8ae9adc54fd8743f8453af41100c841a3b59d1f3e72c"
	I0127 13:18:21.732560 1391899 cri.go:89] found id: "8f9ba8ca38617dd4c57c48f007e00144c96edc8377410ac33996e89c6c626ae7"
	I0127 13:18:21.732563 1391899 cri.go:89] found id: "2cbee0b466a6c9c956ae17e795d58e2af64abe7cc2a2822686196036bfcbe2e6"
	I0127 13:18:21.732566 1391899 cri.go:89] found id: "dd1129a7857e46456ebb67cbdb035eeee9a90ede69ebab5467267e962c2ff88e"
	I0127 13:18:21.732570 1391899 cri.go:89] found id: "4b2832f8237f04564577430293388c25c8df7894084ea4413f6ef816262c7bbc"
	I0127 13:18:21.732573 1391899 cri.go:89] found id: "fca0636811bd616b25b61e58c05bade814ad56fe3e02bfa2b5cfa584cbec2b13"
	I0127 13:18:21.732584 1391899 cri.go:89] found id: "dbf3fe12cc514cdb259c989c31b00ebdd7cf91640d7cf28f457a8a8ee0d4a0ba"
	I0127 13:18:21.732589 1391899 cri.go:89] found id: "f5b735d4310b352e1f2a8c5e9bc269f963d80c5a3e1cc6a8016c1873e8e598a5"
	I0127 13:18:21.732597 1391899 cri.go:89] found id: ""
	I0127 13:18:21.732650 1391899 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	W0127 13:18:21.749320 1391899 kubeadm.go:399] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-01-27T13:18:21Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	I0127 13:18:21.749414 1391899 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0127 13:18:21.761206 1391899 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0127 13:18:21.761227 1391899 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0127 13:18:21.761281 1391899 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0127 13:18:21.772201 1391899 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0127 13:18:21.772663 1391899 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-813213" does not appear in /home/jenkins/minikube-integration/20317-1181389/kubeconfig
	I0127 13:18:21.772774 1391899 kubeconfig.go:62] /home/jenkins/minikube-integration/20317-1181389/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-813213" cluster setting kubeconfig missing "old-k8s-version-813213" context setting]
	I0127 13:18:21.773092 1391899 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20317-1181389/kubeconfig: {Name:mk592f9fdf35ac90774b473f4b93a1c13d4536fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:18:21.774350 1391899 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0127 13:18:21.785760 1391899 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.76.2
	I0127 13:18:21.785806 1391899 kubeadm.go:597] duration metric: took 24.563065ms to restartPrimaryControlPlane
	I0127 13:18:21.785817 1391899 kubeadm.go:394] duration metric: took 104.588715ms to StartCluster
	I0127 13:18:21.785833 1391899 settings.go:142] acquiring lock: {Name:mk65fea0c0d05cbe7dd04ab1bf6947a1297febb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:18:21.785891 1391899 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20317-1181389/kubeconfig
	I0127 13:18:21.786506 1391899 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20317-1181389/kubeconfig: {Name:mk592f9fdf35ac90774b473f4b93a1c13d4536fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:18:21.786688 1391899 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0127 13:18:21.786983 1391899 config.go:182] Loaded profile config "old-k8s-version-813213": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0127 13:18:21.787030 1391899 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 13:18:21.787102 1391899 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-813213"
	I0127 13:18:21.787119 1391899 addons.go:238] Setting addon storage-provisioner=true in "old-k8s-version-813213"
	W0127 13:18:21.787129 1391899 addons.go:247] addon storage-provisioner should already be in state true
	I0127 13:18:21.787152 1391899 host.go:66] Checking if "old-k8s-version-813213" exists ...
	I0127 13:18:21.787158 1391899 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-813213"
	I0127 13:18:21.787179 1391899 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-813213"
	I0127 13:18:21.787477 1391899 cli_runner.go:164] Run: docker container inspect old-k8s-version-813213 --format={{.State.Status}}
	I0127 13:18:21.787613 1391899 cli_runner.go:164] Run: docker container inspect old-k8s-version-813213 --format={{.State.Status}}
	I0127 13:18:21.791474 1391899 addons.go:69] Setting dashboard=true in profile "old-k8s-version-813213"
	I0127 13:18:21.791505 1391899 addons.go:238] Setting addon dashboard=true in "old-k8s-version-813213"
	W0127 13:18:21.791513 1391899 addons.go:247] addon dashboard should already be in state true
	I0127 13:18:21.791551 1391899 host.go:66] Checking if "old-k8s-version-813213" exists ...
	I0127 13:18:21.792025 1391899 cli_runner.go:164] Run: docker container inspect old-k8s-version-813213 --format={{.State.Status}}
	I0127 13:18:21.792180 1391899 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-813213"
	I0127 13:18:21.792192 1391899 addons.go:238] Setting addon metrics-server=true in "old-k8s-version-813213"
	W0127 13:18:21.792198 1391899 addons.go:247] addon metrics-server should already be in state true
	I0127 13:18:21.792220 1391899 host.go:66] Checking if "old-k8s-version-813213" exists ...
	I0127 13:18:21.792629 1391899 cli_runner.go:164] Run: docker container inspect old-k8s-version-813213 --format={{.State.Status}}
	I0127 13:18:21.793553 1391899 out.go:177] * Verifying Kubernetes components...
	I0127 13:18:21.801229 1391899 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 13:18:21.847138 1391899 addons.go:238] Setting addon default-storageclass=true in "old-k8s-version-813213"
	W0127 13:18:21.847211 1391899 addons.go:247] addon default-storageclass should already be in state true
	I0127 13:18:21.847266 1391899 host.go:66] Checking if "old-k8s-version-813213" exists ...
	I0127 13:18:21.847825 1391899 cli_runner.go:164] Run: docker container inspect old-k8s-version-813213 --format={{.State.Status}}
	I0127 13:18:21.856022 1391899 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 13:18:21.858976 1391899 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 13:18:21.859000 1391899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 13:18:21.859068 1391899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-813213
	I0127 13:18:21.869118 1391899 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0127 13:18:21.877979 1391899 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0127 13:18:21.880689 1391899 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0127 13:18:21.880716 1391899 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0127 13:18:21.880783 1391899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-813213
	I0127 13:18:21.881101 1391899 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0127 13:18:21.885114 1391899 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0127 13:18:21.885141 1391899 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0127 13:18:21.885209 1391899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-813213
	I0127 13:18:21.921571 1391899 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 13:18:21.921590 1391899 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 13:18:21.921653 1391899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-813213
	I0127 13:18:21.925076 1391899 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34227 SSHKeyPath:/home/jenkins/minikube-integration/20317-1181389/.minikube/machines/old-k8s-version-813213/id_rsa Username:docker}
	I0127 13:18:21.973202 1391899 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34227 SSHKeyPath:/home/jenkins/minikube-integration/20317-1181389/.minikube/machines/old-k8s-version-813213/id_rsa Username:docker}
	I0127 13:18:21.980900 1391899 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34227 SSHKeyPath:/home/jenkins/minikube-integration/20317-1181389/.minikube/machines/old-k8s-version-813213/id_rsa Username:docker}
	I0127 13:18:21.984807 1391899 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34227 SSHKeyPath:/home/jenkins/minikube-integration/20317-1181389/.minikube/machines/old-k8s-version-813213/id_rsa Username:docker}
	I0127 13:18:22.025688 1391899 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 13:18:22.073153 1391899 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-813213" to be "Ready" ...
	I0127 13:18:22.134540 1391899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 13:18:22.221781 1391899 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0127 13:18:22.221842 1391899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0127 13:18:22.242405 1391899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 13:18:22.262820 1391899 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0127 13:18:22.262848 1391899 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0127 13:18:22.370327 1391899 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0127 13:18:22.370357 1391899 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0127 13:18:22.432937 1391899 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0127 13:18:22.432966 1391899 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0127 13:18:22.481835 1391899 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 13:18:22.481866 1391899 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	W0127 13:18:22.514178 1391899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 13:18:22.514222 1391899 retry.go:31] will retry after 371.986766ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0127 13:18:22.514297 1391899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 13:18:22.514310 1391899 retry.go:31] will retry after 132.374168ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 13:18:22.519845 1391899 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0127 13:18:22.519873 1391899 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0127 13:18:22.544549 1391899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 13:18:22.548375 1391899 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0127 13:18:22.548401 1391899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0127 13:18:22.591235 1391899 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0127 13:18:22.591264 1391899 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0127 13:18:22.642244 1391899 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0127 13:18:22.642277 1391899 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0127 13:18:22.647478 1391899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0127 13:18:22.674743 1391899 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0127 13:18:22.674769 1391899 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	W0127 13:18:22.769596 1391899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 13:18:22.769640 1391899 retry.go:31] will retry after 221.181127ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 13:18:22.775214 1391899 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0127 13:18:22.775240 1391899 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	W0127 13:18:22.815722 1391899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 13:18:22.815755 1391899 retry.go:31] will retry after 525.933911ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 13:18:22.816354 1391899 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 13:18:22.816383 1391899 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0127 13:18:22.838875 1391899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 13:18:22.887047 1391899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0127 13:18:22.986541 1391899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 13:18:22.986583 1391899 retry.go:31] will retry after 345.413306ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 13:18:22.991919 1391899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0127 13:18:23.003252 1391899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 13:18:23.003293 1391899 retry.go:31] will retry after 469.093804ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0127 13:18:23.071095 1391899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 13:18:23.071128 1391899 retry.go:31] will retry after 456.595826ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 13:18:23.333084 1391899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 13:18:23.342427 1391899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0127 13:18:23.458383 1391899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 13:18:23.458464 1391899 retry.go:31] will retry after 487.031074ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 13:18:23.472584 1391899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0127 13:18:23.499436 1391899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 13:18:23.499514 1391899 retry.go:31] will retry after 365.36057ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 13:18:23.528630 1391899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0127 13:18:23.558505 1391899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 13:18:23.558594 1391899 retry.go:31] will retry after 486.935563ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0127 13:18:23.612131 1391899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 13:18:23.612166 1391899 retry.go:31] will retry after 447.709657ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 13:18:23.865847 1391899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0127 13:18:23.942965 1391899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 13:18:23.942997 1391899 retry.go:31] will retry after 986.4987ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 13:18:23.946119 1391899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0127 13:18:24.019401 1391899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 13:18:24.019487 1391899 retry.go:31] will retry after 570.089089ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 13:18:24.046696 1391899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 13:18:24.060095 1391899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 13:18:24.074190 1391899 node_ready.go:53] error getting node "old-k8s-version-813213": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-813213": dial tcp 192.168.76.2:8443: connect: connection refused
	W0127 13:18:24.156609 1391899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 13:18:24.156704 1391899 retry.go:31] will retry after 1.164313936s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0127 13:18:24.173550 1391899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 13:18:24.173587 1391899 retry.go:31] will retry after 456.559808ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 13:18:24.590593 1391899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 13:18:24.630504 1391899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0127 13:18:24.683816 1391899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 13:18:24.683910 1391899 retry.go:31] will retry after 846.273649ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0127 13:18:24.730383 1391899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 13:18:24.730416 1391899 retry.go:31] will retry after 1.83841666s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 13:18:24.930556 1391899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0127 13:18:25.022371 1391899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 13:18:25.022407 1391899 retry.go:31] will retry after 1.62228137s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 13:18:25.321247 1391899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0127 13:18:25.461529 1391899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 13:18:25.461563 1391899 retry.go:31] will retry after 1.585764216s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 13:18:25.532489 1391899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0127 13:18:25.684236 1391899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 13:18:25.684272 1391899 retry.go:31] will retry after 765.340172ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 13:18:26.450751 1391899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 13:18:26.569021 1391899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 13:18:26.574622 1391899 node_ready.go:53] error getting node "old-k8s-version-813213": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-813213": dial tcp 192.168.76.2:8443: connect: connection refused
	W0127 13:18:26.584123 1391899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 13:18:26.584159 1391899 retry.go:31] will retry after 2.709365195s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 13:18:26.645498 1391899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0127 13:18:26.726863 1391899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 13:18:26.726894 1391899 retry.go:31] will retry after 1.411182598s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0127 13:18:26.811512 1391899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 13:18:26.811546 1391899 retry.go:31] will retry after 1.224324798s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 13:18:27.047943 1391899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0127 13:18:27.184126 1391899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 13:18:27.184164 1391899 retry.go:31] will retry after 2.443074526s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 13:18:28.036806 1391899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0127 13:18:28.138908 1391899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0127 13:18:28.178121 1391899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 13:18:28.178163 1391899 retry.go:31] will retry after 3.72387347s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0127 13:18:28.245652 1391899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 13:18:28.245685 1391899 retry.go:31] will retry after 3.277610879s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 13:18:29.073867 1391899 node_ready.go:53] error getting node "old-k8s-version-813213": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-813213": dial tcp 192.168.76.2:8443: connect: connection refused
	I0127 13:18:29.294230 1391899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0127 13:18:29.386769 1391899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 13:18:29.386807 1391899 retry.go:31] will retry after 1.487273331s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 13:18:29.627592 1391899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0127 13:18:29.715943 1391899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 13:18:29.715973 1391899 retry.go:31] will retry after 3.225684221s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 13:18:30.875053 1391899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 13:18:31.524011 1391899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 13:18:31.902602 1391899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0127 13:18:32.942230 1391899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 13:18:37.990947 1391899 node_ready.go:49] node "old-k8s-version-813213" has status "Ready":"True"
	I0127 13:18:37.990969 1391899 node_ready.go:38] duration metric: took 15.917732997s for node "old-k8s-version-813213" to be "Ready" ...
	I0127 13:18:37.990981 1391899 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 13:18:38.186022 1391899 pod_ready.go:79] waiting up to 6m0s for pod "coredns-74ff55c5b-2phj4" in "kube-system" namespace to be "Ready" ...
	I0127 13:18:38.445834 1391899 pod_ready.go:93] pod "coredns-74ff55c5b-2phj4" in "kube-system" namespace has status "Ready":"True"
	I0127 13:18:38.445912 1391899 pod_ready.go:82] duration metric: took 259.799034ms for pod "coredns-74ff55c5b-2phj4" in "kube-system" namespace to be "Ready" ...
	I0127 13:18:38.445941 1391899 pod_ready.go:79] waiting up to 6m0s for pod "etcd-old-k8s-version-813213" in "kube-system" namespace to be "Ready" ...
	I0127 13:18:38.602382 1391899 pod_ready.go:93] pod "etcd-old-k8s-version-813213" in "kube-system" namespace has status "Ready":"True"
	I0127 13:18:38.602458 1391899 pod_ready.go:82] duration metric: took 156.496199ms for pod "etcd-old-k8s-version-813213" in "kube-system" namespace to be "Ready" ...
	I0127 13:18:38.602487 1391899 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-813213" in "kube-system" namespace to be "Ready" ...
	I0127 13:18:38.687674 1391899 pod_ready.go:93] pod "kube-apiserver-old-k8s-version-813213" in "kube-system" namespace has status "Ready":"True"
	I0127 13:18:38.687746 1391899 pod_ready.go:82] duration metric: took 85.233342ms for pod "kube-apiserver-old-k8s-version-813213" in "kube-system" namespace to be "Ready" ...
	I0127 13:18:38.687774 1391899 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-813213" in "kube-system" namespace to be "Ready" ...
	I0127 13:18:40.373077 1391899 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (9.497978147s)
	I0127 13:18:40.373249 1391899 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.849200936s)
	I0127 13:18:40.373298 1391899 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (8.470660537s)
	I0127 13:18:40.373352 1391899 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.431095243s)
	I0127 13:18:40.373549 1391899 addons.go:479] Verifying addon metrics-server=true in "old-k8s-version-813213"
	I0127 13:18:40.376620 1391899 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-813213 addons enable metrics-server
	
	I0127 13:18:40.383402 1391899 out.go:177] * Enabled addons: storage-provisioner, metrics-server, dashboard, default-storageclass
	I0127 13:18:40.386249 1391899 addons.go:514] duration metric: took 18.599198833s for enable addons: enabled=[storage-provisioner metrics-server dashboard default-storageclass]
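	# Addon state can be confirmed from the host after this point; a sketch using
	# the profile name from this run:
	minikube -p old-k8s-version-813213 addons list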
	I0127 13:18:40.695578 1391899 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-813213" in "kube-system" namespace has status "Ready":"False"
	I0127 13:18:43.194602 1391899 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-813213" in "kube-system" namespace has status "Ready":"False"
	I0127 13:18:45.195216 1391899 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-813213" in "kube-system" namespace has status "Ready":"False"
	I0127 13:18:47.703148 1391899 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-813213" in "kube-system" namespace has status "Ready":"False"
	I0127 13:18:50.195122 1391899 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-813213" in "kube-system" namespace has status "Ready":"False"
	I0127 13:18:52.198218 1391899 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-813213" in "kube-system" namespace has status "Ready":"False"
	I0127 13:18:54.717913 1391899 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-813213" in "kube-system" namespace has status "Ready":"False"
	I0127 13:18:57.195944 1391899 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-813213" in "kube-system" namespace has status "Ready":"False"
	I0127 13:18:59.702848 1391899 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-813213" in "kube-system" namespace has status "Ready":"False"
	I0127 13:19:01.705357 1391899 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-813213" in "kube-system" namespace has status "Ready":"False"
	I0127 13:19:04.195657 1391899 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-813213" in "kube-system" namespace has status "Ready":"False"
	I0127 13:19:06.710345 1391899 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-813213" in "kube-system" namespace has status "Ready":"False"
	I0127 13:19:09.195156 1391899 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-813213" in "kube-system" namespace has status "Ready":"False"
	I0127 13:19:11.716563 1391899 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-813213" in "kube-system" namespace has status "Ready":"False"
	I0127 13:19:13.729824 1391899 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-813213" in "kube-system" namespace has status "Ready":"False"
	I0127 13:19:15.734665 1391899 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-813213" in "kube-system" namespace has status "Ready":"False"
	I0127 13:19:18.195277 1391899 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-813213" in "kube-system" namespace has status "Ready":"False"
	I0127 13:19:20.195878 1391899 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-813213" in "kube-system" namespace has status "Ready":"False"
	I0127 13:19:22.702107 1391899 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-813213" in "kube-system" namespace has status "Ready":"False"
	I0127 13:19:24.708578 1391899 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-813213" in "kube-system" namespace has status "Ready":"False"
	I0127 13:19:27.195004 1391899 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-813213" in "kube-system" namespace has status "Ready":"False"
	I0127 13:19:29.698045 1391899 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-813213" in "kube-system" namespace has status "Ready":"False"
	I0127 13:19:31.723407 1391899 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-813213" in "kube-system" namespace has status "Ready":"False"
	I0127 13:19:34.195461 1391899 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-813213" in "kube-system" namespace has status "Ready":"False"
	I0127 13:19:36.696193 1391899 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-813213" in "kube-system" namespace has status "Ready":"False"
	I0127 13:19:38.700026 1391899 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-813213" in "kube-system" namespace has status "Ready":"False"
	I0127 13:19:40.700671 1391899 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-813213" in "kube-system" namespace has status "Ready":"False"
	I0127 13:19:42.703385 1391899 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-813213" in "kube-system" namespace has status "Ready":"False"
	I0127 13:19:45.198443 1391899 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-813213" in "kube-system" namespace has status "Ready":"False"
	I0127 13:19:47.701619 1391899 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-813213" in "kube-system" namespace has status "Ready":"False"
	I0127 13:19:50.195921 1391899 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-813213" in "kube-system" namespace has status "Ready":"False"
	I0127 13:19:52.702269 1391899 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-813213" in "kube-system" namespace has status "Ready":"False"
	I0127 13:19:55.194404 1391899 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-813213" in "kube-system" namespace has status "Ready":"False"
	I0127 13:19:57.197603 1391899 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-813213" in "kube-system" namespace has status "Ready":"False"
	I0127 13:19:59.198142 1391899 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-813213" in "kube-system" namespace has status "Ready":"False"
	I0127 13:20:01.703694 1391899 pod_ready.go:93] pod "kube-controller-manager-old-k8s-version-813213" in "kube-system" namespace has status "Ready":"True"
	I0127 13:20:01.703725 1391899 pod_ready.go:82] duration metric: took 1m23.015929707s for pod "kube-controller-manager-old-k8s-version-813213" in "kube-system" namespace to be "Ready" ...
	I0127 13:20:01.703742 1391899 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-8gl5q" in "kube-system" namespace to be "Ready" ...
	I0127 13:20:01.719573 1391899 pod_ready.go:93] pod "kube-proxy-8gl5q" in "kube-system" namespace has status "Ready":"True"
	I0127 13:20:01.719606 1391899 pod_ready.go:82] duration metric: took 15.853882ms for pod "kube-proxy-8gl5q" in "kube-system" namespace to be "Ready" ...
	I0127 13:20:01.719619 1391899 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-813213" in "kube-system" namespace to be "Ready" ...
	I0127 13:20:01.725917 1391899 pod_ready.go:93] pod "kube-scheduler-old-k8s-version-813213" in "kube-system" namespace has status "Ready":"True"
	I0127 13:20:01.725949 1391899 pod_ready.go:82] duration metric: took 6.319702ms for pod "kube-scheduler-old-k8s-version-813213" in "kube-system" namespace to be "Ready" ...
	I0127 13:20:01.725991 1391899 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace to be "Ready" ...
	I0127 13:20:03.734623 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
	I0127 13:20:06.232985 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
	I0127 13:20:08.732495 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
	I0127 13:20:11.233232 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
	I0127 13:20:13.732159 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
	I0127 13:20:15.733675 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
	I0127 13:20:18.232203 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
	I0127 13:20:20.232685 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
	I0127 13:20:22.732265 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
	I0127 13:20:24.732863 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
	I0127 13:20:27.233069 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
	I0127 13:20:29.732704 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
	I0127 13:20:31.733718 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
	I0127 13:20:33.736633 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
	I0127 13:20:36.233201 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
	I0127 13:20:38.731490 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
	I0127 13:20:40.732887 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
	I0127 13:20:43.232275 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
	I0127 13:20:45.236944 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
	I0127 13:20:47.732065 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
	I0127 13:20:50.232010 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
	I0127 13:20:52.232448 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
	I0127 13:20:54.232827 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
	I0127 13:20:56.732053 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
	I0127 13:20:59.232264 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
	I0127 13:21:01.233320 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
	I0127 13:21:03.737362 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
	I0127 13:21:06.231441 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
	I0127 13:21:08.232882 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
	I0127 13:21:10.732077 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
	I0127 13:21:13.233948 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
	I0127 13:21:15.732599 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
	I0127 13:21:18.231469 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
	I0127 13:21:20.232103 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
	I0127 13:21:22.232453 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
	I0127 13:21:24.237017 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
	I0127 13:21:26.731921 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
	I0127 13:21:28.732277 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
	I0127 13:21:30.732821 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
	I0127 13:21:33.233328 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
	I0127 13:21:35.733530 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
	I0127 13:21:38.232585 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
	I0127 13:21:40.232822 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
	I0127 13:21:42.233527 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
	I0127 13:21:44.238453 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
	I0127 13:21:46.732257 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
	I0127 13:21:48.732687 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
	I0127 13:21:51.233168 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
	I0127 13:21:53.736801 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
	I0127 13:21:56.232874 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
	I0127 13:21:58.732509 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
	I0127 13:22:00.737131 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
	I0127 13:22:03.232057 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
	I0127 13:22:05.232168 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
	I0127 13:22:07.232237 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
	I0127 13:22:09.233131 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
	I0127 13:22:11.732796 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
	I0127 13:22:13.733519 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
	I0127 13:22:16.233219 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
	I0127 13:22:18.731601 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
	I0127 13:22:20.732672 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
	I0127 13:22:22.732940 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
	I0127 13:22:25.232064 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
	I0127 13:22:27.232892 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
	I0127 13:22:29.733136 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
	I0127 13:22:32.233322 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
	I0127 13:22:34.732891 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
	I0127 13:22:37.232324 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
	I0127 13:22:39.233286 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
	I0127 13:22:41.732415 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
	I0127 13:22:43.735768 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
	I0127 13:22:46.233227 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
	I0127 13:22:48.732309 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
	I0127 13:22:50.732483 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
	I0127 13:22:53.232717 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
	I0127 13:22:55.732466 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
	I0127 13:22:57.734556 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
	I0127 13:23:00.277950 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
	I0127 13:23:02.731248 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
	I0127 13:23:04.745820 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
	I0127 13:23:07.231990 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
	I0127 13:23:09.732045 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
	I0127 13:23:12.232594 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
	I0127 13:23:14.233202 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
	I0127 13:23:16.233583 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
	I0127 13:23:18.732156 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
	I0127 13:23:20.734939 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
	I0127 13:23:23.234544 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
	I0127 13:23:25.731660 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
	I0127 13:23:27.733192 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
	I0127 13:23:30.232285 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
	I0127 13:23:32.233139 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
	I0127 13:23:34.731625 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
	I0127 13:23:37.232304 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
	I0127 13:23:39.233314 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
	I0127 13:23:41.732103 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
	I0127 13:23:43.732236 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
	I0127 13:23:45.732819 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
	I0127 13:23:48.232215 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
	I0127 13:23:50.232271 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
	I0127 13:23:52.232582 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
	I0127 13:23:54.232901 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
	I0127 13:23:56.233125 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
	I0127 13:23:58.733645 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
	I0127 13:24:01.235465 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
	I0127 13:24:01.726375 1391899 pod_ready.go:82] duration metric: took 4m0.000363583s for pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace to be "Ready" ...
	E0127 13:24:01.726466 1391899 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0127 13:24:01.726482 1391899 pod_ready.go:39] duration metric: took 5m23.735489594s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
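	# The expired wait above matches the kubelet errors gathered further down: the
	# metrics-server image fake.domain/registry.k8s.io/echoserver:1.4 can never be
	# pulled, so the pod never reports Ready. A diagnostic sketch (pod name taken
	# from this log):
	kubectl -n kube-system describe pod metrics-server-9975d5f86-gkxmm
	kubectl -n kube-system get events \
	  --field-selector involvedObject.name=metrics-server-9975d5f86-gkxmm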
	I0127 13:24:01.726533 1391899 api_server.go:52] waiting for apiserver process to appear ...
	I0127 13:24:01.726612 1391899 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0127 13:24:01.726717 1391899 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 13:24:01.764560 1391899 cri.go:89] found id: "9dc682ca643e0a8340eec4be4ed13358824df730d5eace4f73db56ba726b7da7"
	I0127 13:24:01.764749 1391899 cri.go:89] found id: "dbf3fe12cc514cdb259c989c31b00ebdd7cf91640d7cf28f457a8a8ee0d4a0ba"
	I0127 13:24:01.764845 1391899 cri.go:89] found id: ""
	I0127 13:24:01.764872 1391899 logs.go:282] 2 containers: [9dc682ca643e0a8340eec4be4ed13358824df730d5eace4f73db56ba726b7da7 dbf3fe12cc514cdb259c989c31b00ebdd7cf91640d7cf28f457a8a8ee0d4a0ba]
	I0127 13:24:01.764982 1391899 ssh_runner.go:195] Run: which crictl
	I0127 13:24:01.769278 1391899 ssh_runner.go:195] Run: which crictl
	I0127 13:24:01.773146 1391899 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0127 13:24:01.773218 1391899 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 13:24:01.818544 1391899 cri.go:89] found id: "207271fc8e8b70d52e363f5f48188c01f5185e49f7efced711705901058b9f7f"
	I0127 13:24:01.818568 1391899 cri.go:89] found id: "f5b735d4310b352e1f2a8c5e9bc269f963d80c5a3e1cc6a8016c1873e8e598a5"
	I0127 13:24:01.818574 1391899 cri.go:89] found id: ""
	I0127 13:24:01.818581 1391899 logs.go:282] 2 containers: [207271fc8e8b70d52e363f5f48188c01f5185e49f7efced711705901058b9f7f f5b735d4310b352e1f2a8c5e9bc269f963d80c5a3e1cc6a8016c1873e8e598a5]
	I0127 13:24:01.818652 1391899 ssh_runner.go:195] Run: which crictl
	I0127 13:24:01.822831 1391899 ssh_runner.go:195] Run: which crictl
	I0127 13:24:01.826198 1391899 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0127 13:24:01.826281 1391899 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 13:24:01.865471 1391899 cri.go:89] found id: "6987b703de853a384b6187fa686ac707375e62efa22618ca57d58b0628846579"
	I0127 13:24:01.865537 1391899 cri.go:89] found id: "9977eebb81ceefd915fb8ae9adc54fd8743f8453af41100c841a3b59d1f3e72c"
	I0127 13:24:01.865557 1391899 cri.go:89] found id: ""
	I0127 13:24:01.865580 1391899 logs.go:282] 2 containers: [6987b703de853a384b6187fa686ac707375e62efa22618ca57d58b0628846579 9977eebb81ceefd915fb8ae9adc54fd8743f8453af41100c841a3b59d1f3e72c]
	I0127 13:24:01.865647 1391899 ssh_runner.go:195] Run: which crictl
	I0127 13:24:01.873778 1391899 ssh_runner.go:195] Run: which crictl
	I0127 13:24:01.878357 1391899 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0127 13:24:01.878469 1391899 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 13:24:01.919284 1391899 cri.go:89] found id: "498c359719101282d021d8fef3d5d3ebaf865f947cfaafe3e0ea022abbc87f53"
	I0127 13:24:01.919308 1391899 cri.go:89] found id: "4b2832f8237f04564577430293388c25c8df7894084ea4413f6ef816262c7bbc"
	I0127 13:24:01.919312 1391899 cri.go:89] found id: ""
	I0127 13:24:01.919320 1391899 logs.go:282] 2 containers: [498c359719101282d021d8fef3d5d3ebaf865f947cfaafe3e0ea022abbc87f53 4b2832f8237f04564577430293388c25c8df7894084ea4413f6ef816262c7bbc]
	I0127 13:24:01.919395 1391899 ssh_runner.go:195] Run: which crictl
	I0127 13:24:01.922958 1391899 ssh_runner.go:195] Run: which crictl
	I0127 13:24:01.926473 1391899 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0127 13:24:01.926545 1391899 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 13:24:01.969401 1391899 cri.go:89] found id: "53392608d921e89b00a7a95975a674e420d59a11fbdfeb24bfb54ebe456ab676"
	I0127 13:24:01.969467 1391899 cri.go:89] found id: "2cbee0b466a6c9c956ae17e795d58e2af64abe7cc2a2822686196036bfcbe2e6"
	I0127 13:24:01.969485 1391899 cri.go:89] found id: ""
	I0127 13:24:01.969509 1391899 logs.go:282] 2 containers: [53392608d921e89b00a7a95975a674e420d59a11fbdfeb24bfb54ebe456ab676 2cbee0b466a6c9c956ae17e795d58e2af64abe7cc2a2822686196036bfcbe2e6]
	I0127 13:24:01.969583 1391899 ssh_runner.go:195] Run: which crictl
	I0127 13:24:01.973199 1391899 ssh_runner.go:195] Run: which crictl
	I0127 13:24:01.976743 1391899 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 13:24:01.976815 1391899 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 13:24:02.032067 1391899 cri.go:89] found id: "348496f0d58f24df8178fd9da80e0b658cd42c30cb16a5c7f5b4fed5e96bc2c6"
	I0127 13:24:02.032090 1391899 cri.go:89] found id: "fca0636811bd616b25b61e58c05bade814ad56fe3e02bfa2b5cfa584cbec2b13"
	I0127 13:24:02.032096 1391899 cri.go:89] found id: ""
	I0127 13:24:02.032103 1391899 logs.go:282] 2 containers: [348496f0d58f24df8178fd9da80e0b658cd42c30cb16a5c7f5b4fed5e96bc2c6 fca0636811bd616b25b61e58c05bade814ad56fe3e02bfa2b5cfa584cbec2b13]
	I0127 13:24:02.032162 1391899 ssh_runner.go:195] Run: which crictl
	I0127 13:24:02.036128 1391899 ssh_runner.go:195] Run: which crictl
	I0127 13:24:02.039776 1391899 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0127 13:24:02.039886 1391899 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 13:24:02.091708 1391899 cri.go:89] found id: "98357c86477c2e61a73e06a97b5bda6b521689ebe81b9245a2adf7c79daceef7"
	I0127 13:24:02.091730 1391899 cri.go:89] found id: "8f9ba8ca38617dd4c57c48f007e00144c96edc8377410ac33996e89c6c626ae7"
	I0127 13:24:02.091735 1391899 cri.go:89] found id: ""
	I0127 13:24:02.091741 1391899 logs.go:282] 2 containers: [98357c86477c2e61a73e06a97b5bda6b521689ebe81b9245a2adf7c79daceef7 8f9ba8ca38617dd4c57c48f007e00144c96edc8377410ac33996e89c6c626ae7]
	I0127 13:24:02.091855 1391899 ssh_runner.go:195] Run: which crictl
	I0127 13:24:02.095627 1391899 ssh_runner.go:195] Run: which crictl
	I0127 13:24:02.098976 1391899 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0127 13:24:02.099051 1391899 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0127 13:24:02.141711 1391899 cri.go:89] found id: "ffc35dde525c7cc22c06eeef64a37735aeec2d5e82f29e5ae8ca6de1093abde9"
	I0127 13:24:02.141735 1391899 cri.go:89] found id: "eb3408e2536487de3f2f4d60740cac93d750307a3768323cc24df540e774eb8c"
	I0127 13:24:02.141741 1391899 cri.go:89] found id: ""
	I0127 13:24:02.141756 1391899 logs.go:282] 2 containers: [ffc35dde525c7cc22c06eeef64a37735aeec2d5e82f29e5ae8ca6de1093abde9 eb3408e2536487de3f2f4d60740cac93d750307a3768323cc24df540e774eb8c]
	I0127 13:24:02.141815 1391899 ssh_runner.go:195] Run: which crictl
	I0127 13:24:02.146300 1391899 ssh_runner.go:195] Run: which crictl
	I0127 13:24:02.149876 1391899 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 13:24:02.149945 1391899 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 13:24:02.193944 1391899 cri.go:89] found id: "84b4623c8ca9caacba41fa6535bc6f3dc8e0c52347d8ef68c993a81a7da957d1"
	I0127 13:24:02.194026 1391899 cri.go:89] found id: ""
	I0127 13:24:02.194041 1391899 logs.go:282] 1 containers: [84b4623c8ca9caacba41fa6535bc6f3dc8e0c52347d8ef68c993a81a7da957d1]
	I0127 13:24:02.194119 1391899 ssh_runner.go:195] Run: which crictl
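	# The discovery loop above pairs a filtered crictl listing with a which-crictl
	# probe (two IDs per component: the pre- and post-restart containers); the log
	# gathering below then tails each one. The same two steps run by hand on the
	# node:
	sudo crictl ps -a --quiet --name=kube-apiserver
	sudo crictl logs --tail 400 <container-id>   # substitute an ID printed by the line above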
	I0127 13:24:02.198663 1391899 logs.go:123] Gathering logs for dmesg ...
	I0127 13:24:02.198689 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 13:24:02.216133 1391899 logs.go:123] Gathering logs for describe nodes ...
	I0127 13:24:02.216169 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0127 13:24:02.373008 1391899 logs.go:123] Gathering logs for kube-apiserver [dbf3fe12cc514cdb259c989c31b00ebdd7cf91640d7cf28f457a8a8ee0d4a0ba] ...
	I0127 13:24:02.373056 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dbf3fe12cc514cdb259c989c31b00ebdd7cf91640d7cf28f457a8a8ee0d4a0ba"
	I0127 13:24:02.431971 1391899 logs.go:123] Gathering logs for coredns [9977eebb81ceefd915fb8ae9adc54fd8743f8453af41100c841a3b59d1f3e72c] ...
	I0127 13:24:02.432005 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9977eebb81ceefd915fb8ae9adc54fd8743f8453af41100c841a3b59d1f3e72c"
	I0127 13:24:02.475356 1391899 logs.go:123] Gathering logs for storage-provisioner [ffc35dde525c7cc22c06eeef64a37735aeec2d5e82f29e5ae8ca6de1093abde9] ...
	I0127 13:24:02.475383 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ffc35dde525c7cc22c06eeef64a37735aeec2d5e82f29e5ae8ca6de1093abde9"
	I0127 13:24:02.514117 1391899 logs.go:123] Gathering logs for storage-provisioner [eb3408e2536487de3f2f4d60740cac93d750307a3768323cc24df540e774eb8c] ...
	I0127 13:24:02.514145 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb3408e2536487de3f2f4d60740cac93d750307a3768323cc24df540e774eb8c"
	I0127 13:24:02.561620 1391899 logs.go:123] Gathering logs for kubernetes-dashboard [84b4623c8ca9caacba41fa6535bc6f3dc8e0c52347d8ef68c993a81a7da957d1] ...
	I0127 13:24:02.561649 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84b4623c8ca9caacba41fa6535bc6f3dc8e0c52347d8ef68c993a81a7da957d1"
	I0127 13:24:02.602626 1391899 logs.go:123] Gathering logs for kubelet ...
	I0127 13:24:02.602653 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
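	# The same journal slice that the problem scan below walks through can be
	# pulled straight from the node; a sketch using this run's profile name:
	minikube -p old-k8s-version-813213 ssh -- sudo journalctl -u kubelet -n 400 --no-pager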
	W0127 13:24:02.668629 1391899 logs.go:138] Found kubelet problem: Jan 27 13:18:40 old-k8s-version-813213 kubelet[662]: E0127 13:18:40.161443     662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0127 13:24:02.670001 1391899 logs.go:138] Found kubelet problem: Jan 27 13:18:40 old-k8s-version-813213 kubelet[662]: E0127 13:18:40.742912     662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 13:24:02.673484 1391899 logs.go:138] Found kubelet problem: Jan 27 13:18:52 old-k8s-version-813213 kubelet[662]: E0127 13:18:52.582413     662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0127 13:24:02.675905 1391899 logs.go:138] Found kubelet problem: Jan 27 13:19:00 old-k8s-version-813213 kubelet[662]: E0127 13:19:00.834386     662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
	W0127 13:24:02.676454 1391899 logs.go:138] Found kubelet problem: Jan 27 13:19:01 old-k8s-version-813213 kubelet[662]: E0127 13:19:01.844499     662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
	W0127 13:24:02.676766 1391899 logs.go:138] Found kubelet problem: Jan 27 13:19:04 old-k8s-version-813213 kubelet[662]: E0127 13:19:04.569022     662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 13:24:02.677317 1391899 logs.go:138] Found kubelet problem: Jan 27 13:19:08 old-k8s-version-813213 kubelet[662]: E0127 13:19:08.040193     662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
	W0127 13:24:02.678628 1391899 logs.go:138] Found kubelet problem: Jan 27 13:19:10 old-k8s-version-813213 kubelet[662]: E0127 13:19:10.866611     662 pod_workers.go:191] Error syncing pod b3ee3aee-1b8f-4040-9cbf-f87cb41abfd5 ("storage-provisioner_kube-system(b3ee3aee-1b8f-4040-9cbf-f87cb41abfd5)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(b3ee3aee-1b8f-4040-9cbf-f87cb41abfd5)"
	W0127 13:24:02.682650 1391899 logs.go:138] Found kubelet problem: Jan 27 13:19:16 old-k8s-version-813213 kubelet[662]: E0127 13:19:16.578183     662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0127 13:24:02.683806 1391899 logs.go:138] Found kubelet problem: Jan 27 13:19:21 old-k8s-version-813213 kubelet[662]: E0127 13:19:21.914690     662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
	W0127 13:24:02.684306 1391899 logs.go:138] Found kubelet problem: Jan 27 13:19:28 old-k8s-version-813213 kubelet[662]: E0127 13:19:28.040605     662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
	W0127 13:24:02.684519 1391899 logs.go:138] Found kubelet problem: Jan 27 13:19:31 old-k8s-version-813213 kubelet[662]: E0127 13:19:31.569400     662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 13:24:02.684888 1391899 logs.go:138] Found kubelet problem: Jan 27 13:19:40 old-k8s-version-813213 kubelet[662]: E0127 13:19:40.569017     662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
	W0127 13:24:02.685153 1391899 logs.go:138] Found kubelet problem: Jan 27 13:19:46 old-k8s-version-813213 kubelet[662]: E0127 13:19:46.569235     662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 13:24:02.685873 1391899 logs.go:138] Found kubelet problem: Jan 27 13:19:54 old-k8s-version-813213 kubelet[662]: E0127 13:19:54.998749     662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
	W0127 13:24:02.686236 1391899 logs.go:138] Found kubelet problem: Jan 27 13:19:58 old-k8s-version-813213 kubelet[662]: E0127 13:19:58.040157     662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
	W0127 13:24:02.688891 1391899 logs.go:138] Found kubelet problem: Jan 27 13:20:00 old-k8s-version-813213 kubelet[662]: E0127 13:20:00.591029     662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0127 13:24:02.689278 1391899 logs.go:138] Found kubelet problem: Jan 27 13:20:08 old-k8s-version-813213 kubelet[662]: E0127 13:20:08.568750     662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
	W0127 13:24:02.689490 1391899 logs.go:138] Found kubelet problem: Jan 27 13:20:14 old-k8s-version-813213 kubelet[662]: E0127 13:20:14.569356     662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 13:24:02.689843 1391899 logs.go:138] Found kubelet problem: Jan 27 13:20:20 old-k8s-version-813213 kubelet[662]: E0127 13:20:20.568757     662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
	W0127 13:24:02.690056 1391899 logs.go:138] Found kubelet problem: Jan 27 13:20:25 old-k8s-version-813213 kubelet[662]: E0127 13:20:25.569626     662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 13:24:02.690407 1391899 logs.go:138] Found kubelet problem: Jan 27 13:20:33 old-k8s-version-813213 kubelet[662]: E0127 13:20:33.569406     662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
	W0127 13:24:02.690618 1391899 logs.go:138] Found kubelet problem: Jan 27 13:20:37 old-k8s-version-813213 kubelet[662]: E0127 13:20:37.569368     662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 13:24:02.691265 1391899 logs.go:138] Found kubelet problem: Jan 27 13:20:45 old-k8s-version-813213 kubelet[662]: E0127 13:20:45.161804     662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
	W0127 13:24:02.691625 1391899 logs.go:138] Found kubelet problem: Jan 27 13:20:48 old-k8s-version-813213 kubelet[662]: E0127 13:20:48.040150     662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
	W0127 13:24:02.691850 1391899 logs.go:138] Found kubelet problem: Jan 27 13:20:51 old-k8s-version-813213 kubelet[662]: E0127 13:20:51.569490     662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 13:24:02.692202 1391899 logs.go:138] Found kubelet problem: Jan 27 13:21:02 old-k8s-version-813213 kubelet[662]: E0127 13:21:02.568801     662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
	W0127 13:24:02.692408 1391899 logs.go:138] Found kubelet problem: Jan 27 13:21:04 old-k8s-version-813213 kubelet[662]: E0127 13:21:04.569222     662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 13:24:02.692619 1391899 logs.go:138] Found kubelet problem: Jan 27 13:21:15 old-k8s-version-813213 kubelet[662]: E0127 13:21:15.569338     662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 13:24:02.692971 1391899 logs.go:138] Found kubelet problem: Jan 27 13:21:16 old-k8s-version-813213 kubelet[662]: E0127 13:21:16.568781     662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
	W0127 13:24:02.693356 1391899 logs.go:138] Found kubelet problem: Jan 27 13:21:30 old-k8s-version-813213 kubelet[662]: E0127 13:21:30.569462     662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
	W0127 13:24:02.695822 1391899 logs.go:138] Found kubelet problem: Jan 27 13:21:30 old-k8s-version-813213 kubelet[662]: E0127 13:21:30.578384     662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0127 13:24:02.696186 1391899 logs.go:138] Found kubelet problem: Jan 27 13:21:41 old-k8s-version-813213 kubelet[662]: E0127 13:21:41.569455     662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
	W0127 13:24:02.696395 1391899 logs.go:138] Found kubelet problem: Jan 27 13:21:42 old-k8s-version-813213 kubelet[662]: E0127 13:21:42.569422     662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 13:24:02.696746 1391899 logs.go:138] Found kubelet problem: Jan 27 13:21:52 old-k8s-version-813213 kubelet[662]: E0127 13:21:52.568642     662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
	W0127 13:24:02.696954 1391899 logs.go:138] Found kubelet problem: Jan 27 13:21:57 old-k8s-version-813213 kubelet[662]: E0127 13:21:57.570339     662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 13:24:02.697320 1391899 logs.go:138] Found kubelet problem: Jan 27 13:22:03 old-k8s-version-813213 kubelet[662]: E0127 13:22:03.569863     662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
	W0127 13:24:02.697530 1391899 logs.go:138] Found kubelet problem: Jan 27 13:22:12 old-k8s-version-813213 kubelet[662]: E0127 13:22:12.569369     662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 13:24:02.698242 1391899 logs.go:138] Found kubelet problem: Jan 27 13:22:15 old-k8s-version-813213 kubelet[662]: E0127 13:22:15.386979     662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
	W0127 13:24:02.698602 1391899 logs.go:138] Found kubelet problem: Jan 27 13:22:18 old-k8s-version-813213 kubelet[662]: E0127 13:22:18.040158     662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
	W0127 13:24:02.698812 1391899 logs.go:138] Found kubelet problem: Jan 27 13:22:26 old-k8s-version-813213 kubelet[662]: E0127 13:22:26.569341     662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 13:24:02.699199 1391899 logs.go:138] Found kubelet problem: Jan 27 13:22:30 old-k8s-version-813213 kubelet[662]: E0127 13:22:30.568782     662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
	W0127 13:24:02.699410 1391899 logs.go:138] Found kubelet problem: Jan 27 13:22:37 old-k8s-version-813213 kubelet[662]: E0127 13:22:37.572662     662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 13:24:02.699761 1391899 logs.go:138] Found kubelet problem: Jan 27 13:22:41 old-k8s-version-813213 kubelet[662]: E0127 13:22:41.568879     662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
	W0127 13:24:02.699978 1391899 logs.go:138] Found kubelet problem: Jan 27 13:22:49 old-k8s-version-813213 kubelet[662]: E0127 13:22:49.569740     662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 13:24:02.700330 1391899 logs.go:138] Found kubelet problem: Jan 27 13:22:56 old-k8s-version-813213 kubelet[662]: E0127 13:22:56.568752     662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
	W0127 13:24:02.700537 1391899 logs.go:138] Found kubelet problem: Jan 27 13:23:03 old-k8s-version-813213 kubelet[662]: E0127 13:23:03.569135     662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 13:24:02.700905 1391899 logs.go:138] Found kubelet problem: Jan 27 13:23:11 old-k8s-version-813213 kubelet[662]: E0127 13:23:11.569770     662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
	W0127 13:24:02.701163 1391899 logs.go:138] Found kubelet problem: Jan 27 13:23:16 old-k8s-version-813213 kubelet[662]: E0127 13:23:16.569194     662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 13:24:02.701580 1391899 logs.go:138] Found kubelet problem: Jan 27 13:23:26 old-k8s-version-813213 kubelet[662]: E0127 13:23:26.568839     662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
	W0127 13:24:02.701771 1391899 logs.go:138] Found kubelet problem: Jan 27 13:23:28 old-k8s-version-813213 kubelet[662]: E0127 13:23:28.569744     662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 13:24:02.702095 1391899 logs.go:138] Found kubelet problem: Jan 27 13:23:37 old-k8s-version-813213 kubelet[662]: E0127 13:23:37.568849     662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
	W0127 13:24:02.702276 1391899 logs.go:138] Found kubelet problem: Jan 27 13:23:41 old-k8s-version-813213 kubelet[662]: E0127 13:23:41.573539     662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 13:24:02.702598 1391899 logs.go:138] Found kubelet problem: Jan 27 13:23:49 old-k8s-version-813213 kubelet[662]: E0127 13:23:49.569246     662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
	W0127 13:24:02.702777 1391899 logs.go:138] Found kubelet problem: Jan 27 13:23:54 old-k8s-version-813213 kubelet[662]: E0127 13:23:54.569362     662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0127 13:24:02.702788 1391899 logs.go:123] Gathering logs for kube-scheduler [498c359719101282d021d8fef3d5d3ebaf865f947cfaafe3e0ea022abbc87f53] ...
	I0127 13:24:02.702805 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 498c359719101282d021d8fef3d5d3ebaf865f947cfaafe3e0ea022abbc87f53"
	I0127 13:24:02.750568 1391899 logs.go:123] Gathering logs for kube-controller-manager [348496f0d58f24df8178fd9da80e0b658cd42c30cb16a5c7f5b4fed5e96bc2c6] ...
	I0127 13:24:02.750648 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 348496f0d58f24df8178fd9da80e0b658cd42c30cb16a5c7f5b4fed5e96bc2c6"
	I0127 13:24:02.818023 1391899 logs.go:123] Gathering logs for kindnet [98357c86477c2e61a73e06a97b5bda6b521689ebe81b9245a2adf7c79daceef7] ...
	I0127 13:24:02.818058 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 98357c86477c2e61a73e06a97b5bda6b521689ebe81b9245a2adf7c79daceef7"
	I0127 13:24:02.860506 1391899 logs.go:123] Gathering logs for kube-apiserver [9dc682ca643e0a8340eec4be4ed13358824df730d5eace4f73db56ba726b7da7] ...
	I0127 13:24:02.860534 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9dc682ca643e0a8340eec4be4ed13358824df730d5eace4f73db56ba726b7da7"
	I0127 13:24:02.930144 1391899 logs.go:123] Gathering logs for kube-scheduler [4b2832f8237f04564577430293388c25c8df7894084ea4413f6ef816262c7bbc] ...
	I0127 13:24:02.930197 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b2832f8237f04564577430293388c25c8df7894084ea4413f6ef816262c7bbc"
	I0127 13:24:02.976555 1391899 logs.go:123] Gathering logs for containerd ...
	I0127 13:24:02.976587 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0127 13:24:03.038872 1391899 logs.go:123] Gathering logs for etcd [207271fc8e8b70d52e363f5f48188c01f5185e49f7efced711705901058b9f7f] ...
	I0127 13:24:03.038913 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 207271fc8e8b70d52e363f5f48188c01f5185e49f7efced711705901058b9f7f"
	I0127 13:24:03.089942 1391899 logs.go:123] Gathering logs for coredns [6987b703de853a384b6187fa686ac707375e62efa22618ca57d58b0628846579] ...
	I0127 13:24:03.089974 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6987b703de853a384b6187fa686ac707375e62efa22618ca57d58b0628846579"
	I0127 13:24:03.132438 1391899 logs.go:123] Gathering logs for kube-proxy [53392608d921e89b00a7a95975a674e420d59a11fbdfeb24bfb54ebe456ab676] ...
	I0127 13:24:03.132467 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 53392608d921e89b00a7a95975a674e420d59a11fbdfeb24bfb54ebe456ab676"
	I0127 13:24:03.183093 1391899 logs.go:123] Gathering logs for kube-proxy [2cbee0b466a6c9c956ae17e795d58e2af64abe7cc2a2822686196036bfcbe2e6] ...
	I0127 13:24:03.183121 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2cbee0b466a6c9c956ae17e795d58e2af64abe7cc2a2822686196036bfcbe2e6"
	I0127 13:24:03.223735 1391899 logs.go:123] Gathering logs for kube-controller-manager [fca0636811bd616b25b61e58c05bade814ad56fe3e02bfa2b5cfa584cbec2b13] ...
	I0127 13:24:03.223763 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fca0636811bd616b25b61e58c05bade814ad56fe3e02bfa2b5cfa584cbec2b13"
	I0127 13:24:03.284290 1391899 logs.go:123] Gathering logs for kindnet [8f9ba8ca38617dd4c57c48f007e00144c96edc8377410ac33996e89c6c626ae7] ...
	I0127 13:24:03.284363 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f9ba8ca38617dd4c57c48f007e00144c96edc8377410ac33996e89c6c626ae7"
	I0127 13:24:03.334184 1391899 logs.go:123] Gathering logs for container status ...
	I0127 13:24:03.334221 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 13:24:03.377673 1391899 logs.go:123] Gathering logs for etcd [f5b735d4310b352e1f2a8c5e9bc269f963d80c5a3e1cc6a8016c1873e8e598a5] ...
	I0127 13:24:03.377703 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f5b735d4310b352e1f2a8c5e9bc269f963d80c5a3e1cc6a8016c1873e8e598a5"
	I0127 13:24:03.418187 1391899 out.go:358] Setting ErrFile to fd 2...
	I0127 13:24:03.418214 1391899 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0127 13:24:03.418270 1391899 out.go:270] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0127 13:24:03.418286 1391899 out.go:270]   Jan 27 13:23:28 old-k8s-version-813213 kubelet[662]: E0127 13:23:28.569744     662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Jan 27 13:23:28 old-k8s-version-813213 kubelet[662]: E0127 13:23:28.569744     662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 13:24:03.418299 1391899 out.go:270]   Jan 27 13:23:37 old-k8s-version-813213 kubelet[662]: E0127 13:23:37.568849     662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
	  Jan 27 13:23:37 old-k8s-version-813213 kubelet[662]: E0127 13:23:37.568849     662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
	W0127 13:24:03.418306 1391899 out.go:270]   Jan 27 13:23:41 old-k8s-version-813213 kubelet[662]: E0127 13:23:41.573539     662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Jan 27 13:23:41 old-k8s-version-813213 kubelet[662]: E0127 13:23:41.573539     662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 13:24:03.418317 1391899 out.go:270]   Jan 27 13:23:49 old-k8s-version-813213 kubelet[662]: E0127 13:23:49.569246     662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
	  Jan 27 13:23:49 old-k8s-version-813213 kubelet[662]: E0127 13:23:49.569246     662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
	W0127 13:24:03.418325 1391899 out.go:270]   Jan 27 13:23:54 old-k8s-version-813213 kubelet[662]: E0127 13:23:54.569362     662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Jan 27 13:23:54 old-k8s-version-813213 kubelet[662]: E0127 13:23:54.569362     662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0127 13:24:03.418337 1391899 out.go:358] Setting ErrFile to fd 2...
	I0127 13:24:03.418343 1391899 out.go:392] TERM=,COLORTERM=, which probably does not support color
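Note: the recurring kubelet errors summarized above are this profile's steady failure mode rather than new breakage. The metrics-server pod references fake.domain/registry.k8s.io/echoserver:1.4, and the node's DNS cannot resolve fake.domain ("lookup fake.domain on 192.168.76.1:53: no such host"), so the pod stays in ImagePullBackOff, while dashboard-metrics-scraper restarts under a growing CrashLoopBackOff back-off (10s, 20s, 40s, 1m20s, 2m40s). A minimal sketch for inspecting the same state by hand, assuming the profile name from this run and that kubectl has its context (minikube names the context after the profile):

	# list all containers inside the minikube node (same command the harness runs below)
	minikube -p old-k8s-version-813213 ssh -- sudo crictl ps -a
	# show pod-level back-off status for the two failing pods
	kubectl --context old-k8s-version-813213 -n kube-system get pods
	kubectl --context old-k8s-version-813213 -n kubernetes-dashboard get pods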
	I0127 13:24:13.421468 1391899 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:24:13.436058 1391899 api_server.go:72] duration metric: took 5m51.649333782s to wait for apiserver process to appear ...
	I0127 13:24:13.436095 1391899 api_server.go:88] waiting for apiserver healthz status ...
	I0127 13:24:13.436141 1391899 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0127 13:24:13.436204 1391899 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 13:24:13.494095 1391899 cri.go:89] found id: "9dc682ca643e0a8340eec4be4ed13358824df730d5eace4f73db56ba726b7da7"
	I0127 13:24:13.494129 1391899 cri.go:89] found id: "dbf3fe12cc514cdb259c989c31b00ebdd7cf91640d7cf28f457a8a8ee0d4a0ba"
	I0127 13:24:13.494135 1391899 cri.go:89] found id: ""
	I0127 13:24:13.494145 1391899 logs.go:282] 2 containers: [9dc682ca643e0a8340eec4be4ed13358824df730d5eace4f73db56ba726b7da7 dbf3fe12cc514cdb259c989c31b00ebdd7cf91640d7cf28f457a8a8ee0d4a0ba]
	I0127 13:24:13.494216 1391899 ssh_runner.go:195] Run: which crictl
	I0127 13:24:13.498830 1391899 ssh_runner.go:195] Run: which crictl
	I0127 13:24:13.503370 1391899 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0127 13:24:13.503440 1391899 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 13:24:13.566783 1391899 cri.go:89] found id: "207271fc8e8b70d52e363f5f48188c01f5185e49f7efced711705901058b9f7f"
	I0127 13:24:13.566804 1391899 cri.go:89] found id: "f5b735d4310b352e1f2a8c5e9bc269f963d80c5a3e1cc6a8016c1873e8e598a5"
	I0127 13:24:13.566809 1391899 cri.go:89] found id: ""
	I0127 13:24:13.566815 1391899 logs.go:282] 2 containers: [207271fc8e8b70d52e363f5f48188c01f5185e49f7efced711705901058b9f7f f5b735d4310b352e1f2a8c5e9bc269f963d80c5a3e1cc6a8016c1873e8e598a5]
	I0127 13:24:13.566884 1391899 ssh_runner.go:195] Run: which crictl
	I0127 13:24:13.571754 1391899 ssh_runner.go:195] Run: which crictl
	I0127 13:24:13.579722 1391899 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0127 13:24:13.579801 1391899 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 13:24:13.636050 1391899 cri.go:89] found id: "6987b703de853a384b6187fa686ac707375e62efa22618ca57d58b0628846579"
	I0127 13:24:13.636069 1391899 cri.go:89] found id: "9977eebb81ceefd915fb8ae9adc54fd8743f8453af41100c841a3b59d1f3e72c"
	I0127 13:24:13.636074 1391899 cri.go:89] found id: ""
	I0127 13:24:13.636081 1391899 logs.go:282] 2 containers: [6987b703de853a384b6187fa686ac707375e62efa22618ca57d58b0628846579 9977eebb81ceefd915fb8ae9adc54fd8743f8453af41100c841a3b59d1f3e72c]
	I0127 13:24:13.636140 1391899 ssh_runner.go:195] Run: which crictl
	I0127 13:24:13.641250 1391899 ssh_runner.go:195] Run: which crictl
	I0127 13:24:13.645845 1391899 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0127 13:24:13.645910 1391899 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 13:24:13.730109 1391899 cri.go:89] found id: "498c359719101282d021d8fef3d5d3ebaf865f947cfaafe3e0ea022abbc87f53"
	I0127 13:24:13.730126 1391899 cri.go:89] found id: "4b2832f8237f04564577430293388c25c8df7894084ea4413f6ef816262c7bbc"
	I0127 13:24:13.730131 1391899 cri.go:89] found id: ""
	I0127 13:24:13.730138 1391899 logs.go:282] 2 containers: [498c359719101282d021d8fef3d5d3ebaf865f947cfaafe3e0ea022abbc87f53 4b2832f8237f04564577430293388c25c8df7894084ea4413f6ef816262c7bbc]
	I0127 13:24:13.730188 1391899 ssh_runner.go:195] Run: which crictl
	I0127 13:24:13.735061 1391899 ssh_runner.go:195] Run: which crictl
	I0127 13:24:13.739961 1391899 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0127 13:24:13.740030 1391899 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 13:24:13.793549 1391899 cri.go:89] found id: "53392608d921e89b00a7a95975a674e420d59a11fbdfeb24bfb54ebe456ab676"
	I0127 13:24:13.793568 1391899 cri.go:89] found id: "2cbee0b466a6c9c956ae17e795d58e2af64abe7cc2a2822686196036bfcbe2e6"
	I0127 13:24:13.793573 1391899 cri.go:89] found id: ""
	I0127 13:24:13.793580 1391899 logs.go:282] 2 containers: [53392608d921e89b00a7a95975a674e420d59a11fbdfeb24bfb54ebe456ab676 2cbee0b466a6c9c956ae17e795d58e2af64abe7cc2a2822686196036bfcbe2e6]
	I0127 13:24:13.793635 1391899 ssh_runner.go:195] Run: which crictl
	I0127 13:24:13.798974 1391899 ssh_runner.go:195] Run: which crictl
	I0127 13:24:13.803128 1391899 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 13:24:13.803199 1391899 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 13:24:13.865547 1391899 cri.go:89] found id: "348496f0d58f24df8178fd9da80e0b658cd42c30cb16a5c7f5b4fed5e96bc2c6"
	I0127 13:24:13.865586 1391899 cri.go:89] found id: "fca0636811bd616b25b61e58c05bade814ad56fe3e02bfa2b5cfa584cbec2b13"
	I0127 13:24:13.865591 1391899 cri.go:89] found id: ""
	I0127 13:24:13.865597 1391899 logs.go:282] 2 containers: [348496f0d58f24df8178fd9da80e0b658cd42c30cb16a5c7f5b4fed5e96bc2c6 fca0636811bd616b25b61e58c05bade814ad56fe3e02bfa2b5cfa584cbec2b13]
	I0127 13:24:13.865654 1391899 ssh_runner.go:195] Run: which crictl
	I0127 13:24:13.869602 1391899 ssh_runner.go:195] Run: which crictl
	I0127 13:24:13.873071 1391899 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0127 13:24:13.873189 1391899 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 13:24:13.920522 1391899 cri.go:89] found id: "98357c86477c2e61a73e06a97b5bda6b521689ebe81b9245a2adf7c79daceef7"
	I0127 13:24:13.920541 1391899 cri.go:89] found id: "8f9ba8ca38617dd4c57c48f007e00144c96edc8377410ac33996e89c6c626ae7"
	I0127 13:24:13.920546 1391899 cri.go:89] found id: ""
	I0127 13:24:13.920553 1391899 logs.go:282] 2 containers: [98357c86477c2e61a73e06a97b5bda6b521689ebe81b9245a2adf7c79daceef7 8f9ba8ca38617dd4c57c48f007e00144c96edc8377410ac33996e89c6c626ae7]
	I0127 13:24:13.920606 1391899 ssh_runner.go:195] Run: which crictl
	I0127 13:24:13.924728 1391899 ssh_runner.go:195] Run: which crictl
	I0127 13:24:13.928717 1391899 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0127 13:24:13.928777 1391899 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0127 13:24:13.981265 1391899 cri.go:89] found id: "ffc35dde525c7cc22c06eeef64a37735aeec2d5e82f29e5ae8ca6de1093abde9"
	I0127 13:24:13.981289 1391899 cri.go:89] found id: "eb3408e2536487de3f2f4d60740cac93d750307a3768323cc24df540e774eb8c"
	I0127 13:24:13.981294 1391899 cri.go:89] found id: ""
	I0127 13:24:13.981300 1391899 logs.go:282] 2 containers: [ffc35dde525c7cc22c06eeef64a37735aeec2d5e82f29e5ae8ca6de1093abde9 eb3408e2536487de3f2f4d60740cac93d750307a3768323cc24df540e774eb8c]
	I0127 13:24:13.981386 1391899 ssh_runner.go:195] Run: which crictl
	I0127 13:24:13.985260 1391899 ssh_runner.go:195] Run: which crictl
	I0127 13:24:13.988991 1391899 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 13:24:13.989131 1391899 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 13:24:14.041772 1391899 cri.go:89] found id: "84b4623c8ca9caacba41fa6535bc6f3dc8e0c52347d8ef68c993a81a7da957d1"
	I0127 13:24:14.041793 1391899 cri.go:89] found id: ""
	I0127 13:24:14.041801 1391899 logs.go:282] 1 containers: [84b4623c8ca9caacba41fa6535bc6f3dc8e0c52347d8ef68c993a81a7da957d1]
	I0127 13:24:14.041860 1391899 ssh_runner.go:195] Run: which crictl
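The enumeration above follows a two-step pattern for every component: crictl ps -a --quiet --name=<component> prints bare container IDs (typically the current container plus one exited predecessor, which is why most components list two IDs), and each ID is then fed to crictl logs in the gathering steps that follow. A sketch of the same flow run manually inside the node, using kube-apiserver as the example component and a placeholder for the returned ID:

	# step 1: bare container IDs for the component (may include exited containers)
	sudo crictl ps -a --quiet --name=kube-apiserver
	# step 2: tail the last 400 log lines of one returned ID (placeholder shown)
	sudo /usr/bin/crictl logs --tail 400 <container-id>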
	I0127 13:24:14.045758 1391899 logs.go:123] Gathering logs for kube-apiserver [dbf3fe12cc514cdb259c989c31b00ebdd7cf91640d7cf28f457a8a8ee0d4a0ba] ...
	I0127 13:24:14.045783 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dbf3fe12cc514cdb259c989c31b00ebdd7cf91640d7cf28f457a8a8ee0d4a0ba"
	I0127 13:24:14.119271 1391899 logs.go:123] Gathering logs for coredns [9977eebb81ceefd915fb8ae9adc54fd8743f8453af41100c841a3b59d1f3e72c] ...
	I0127 13:24:14.119329 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9977eebb81ceefd915fb8ae9adc54fd8743f8453af41100c841a3b59d1f3e72c"
	I0127 13:24:14.184713 1391899 logs.go:123] Gathering logs for dmesg ...
	I0127 13:24:14.184744 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 13:24:14.205804 1391899 logs.go:123] Gathering logs for describe nodes ...
	I0127 13:24:14.205839 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0127 13:24:14.425188 1391899 logs.go:123] Gathering logs for kube-apiserver [9dc682ca643e0a8340eec4be4ed13358824df730d5eace4f73db56ba726b7da7] ...
	I0127 13:24:14.425269 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9dc682ca643e0a8340eec4be4ed13358824df730d5eace4f73db56ba726b7da7"
	I0127 13:24:14.513059 1391899 logs.go:123] Gathering logs for kindnet [8f9ba8ca38617dd4c57c48f007e00144c96edc8377410ac33996e89c6c626ae7] ...
	I0127 13:24:14.513133 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f9ba8ca38617dd4c57c48f007e00144c96edc8377410ac33996e89c6c626ae7"
	I0127 13:24:14.569064 1391899 logs.go:123] Gathering logs for storage-provisioner [eb3408e2536487de3f2f4d60740cac93d750307a3768323cc24df540e774eb8c] ...
	I0127 13:24:14.569092 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb3408e2536487de3f2f4d60740cac93d750307a3768323cc24df540e774eb8c"
	I0127 13:24:14.641486 1391899 logs.go:123] Gathering logs for kubelet ...
	I0127 13:24:14.641555 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0127 13:24:14.715359 1391899 logs.go:138] Found kubelet problem: Jan 27 13:18:40 old-k8s-version-813213 kubelet[662]: E0127 13:18:40.161443     662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0127 13:24:14.716229 1391899 logs.go:138] Found kubelet problem: Jan 27 13:18:40 old-k8s-version-813213 kubelet[662]: E0127 13:18:40.742912     662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 13:24:14.719582 1391899 logs.go:138] Found kubelet problem: Jan 27 13:18:52 old-k8s-version-813213 kubelet[662]: E0127 13:18:52.582413     662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0127 13:24:14.721888 1391899 logs.go:138] Found kubelet problem: Jan 27 13:19:00 old-k8s-version-813213 kubelet[662]: E0127 13:19:00.834386     662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
	W0127 13:24:14.722259 1391899 logs.go:138] Found kubelet problem: Jan 27 13:19:01 old-k8s-version-813213 kubelet[662]: E0127 13:19:01.844499     662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
	W0127 13:24:14.722476 1391899 logs.go:138] Found kubelet problem: Jan 27 13:19:04 old-k8s-version-813213 kubelet[662]: E0127 13:19:04.569022     662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 13:24:14.722844 1391899 logs.go:138] Found kubelet problem: Jan 27 13:19:08 old-k8s-version-813213 kubelet[662]: E0127 13:19:08.040193     662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
	W0127 13:24:14.723653 1391899 logs.go:138] Found kubelet problem: Jan 27 13:19:10 old-k8s-version-813213 kubelet[662]: E0127 13:19:10.866611     662 pod_workers.go:191] Error syncing pod b3ee3aee-1b8f-4040-9cbf-f87cb41abfd5 ("storage-provisioner_kube-system(b3ee3aee-1b8f-4040-9cbf-f87cb41abfd5)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(b3ee3aee-1b8f-4040-9cbf-f87cb41abfd5)"
	W0127 13:24:14.726364 1391899 logs.go:138] Found kubelet problem: Jan 27 13:19:16 old-k8s-version-813213 kubelet[662]: E0127 13:19:16.578183     662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0127 13:24:14.727343 1391899 logs.go:138] Found kubelet problem: Jan 27 13:19:21 old-k8s-version-813213 kubelet[662]: E0127 13:19:21.914690     662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
	W0127 13:24:14.727830 1391899 logs.go:138] Found kubelet problem: Jan 27 13:19:28 old-k8s-version-813213 kubelet[662]: E0127 13:19:28.040605     662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
	W0127 13:24:14.728048 1391899 logs.go:138] Found kubelet problem: Jan 27 13:19:31 old-k8s-version-813213 kubelet[662]: E0127 13:19:31.569400     662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 13:24:14.728403 1391899 logs.go:138] Found kubelet problem: Jan 27 13:19:40 old-k8s-version-813213 kubelet[662]: E0127 13:19:40.569017     662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
	W0127 13:24:14.728614 1391899 logs.go:138] Found kubelet problem: Jan 27 13:19:46 old-k8s-version-813213 kubelet[662]: E0127 13:19:46.569235     662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 13:24:14.729266 1391899 logs.go:138] Found kubelet problem: Jan 27 13:19:54 old-k8s-version-813213 kubelet[662]: E0127 13:19:54.998749     662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
	W0127 13:24:14.729680 1391899 logs.go:138] Found kubelet problem: Jan 27 13:19:58 old-k8s-version-813213 kubelet[662]: E0127 13:19:58.040157     662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
	W0127 13:24:14.732147 1391899 logs.go:138] Found kubelet problem: Jan 27 13:20:00 old-k8s-version-813213 kubelet[662]: E0127 13:20:00.591029     662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0127 13:24:14.732514 1391899 logs.go:138] Found kubelet problem: Jan 27 13:20:08 old-k8s-version-813213 kubelet[662]: E0127 13:20:08.568750     662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
	W0127 13:24:14.732722 1391899 logs.go:138] Found kubelet problem: Jan 27 13:20:14 old-k8s-version-813213 kubelet[662]: E0127 13:20:14.569356     662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 13:24:14.733086 1391899 logs.go:138] Found kubelet problem: Jan 27 13:20:20 old-k8s-version-813213 kubelet[662]: E0127 13:20:20.568757     662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
	W0127 13:24:14.733291 1391899 logs.go:138] Found kubelet problem: Jan 27 13:20:25 old-k8s-version-813213 kubelet[662]: E0127 13:20:25.569626     662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 13:24:14.733665 1391899 logs.go:138] Found kubelet problem: Jan 27 13:20:33 old-k8s-version-813213 kubelet[662]: E0127 13:20:33.569406     662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
	W0127 13:24:14.733881 1391899 logs.go:138] Found kubelet problem: Jan 27 13:20:37 old-k8s-version-813213 kubelet[662]: E0127 13:20:37.569368     662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 13:24:14.734571 1391899 logs.go:138] Found kubelet problem: Jan 27 13:20:45 old-k8s-version-813213 kubelet[662]: E0127 13:20:45.161804     662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
	W0127 13:24:14.734937 1391899 logs.go:138] Found kubelet problem: Jan 27 13:20:48 old-k8s-version-813213 kubelet[662]: E0127 13:20:48.040150     662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
	W0127 13:24:14.735156 1391899 logs.go:138] Found kubelet problem: Jan 27 13:20:51 old-k8s-version-813213 kubelet[662]: E0127 13:20:51.569490     662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 13:24:14.735527 1391899 logs.go:138] Found kubelet problem: Jan 27 13:21:02 old-k8s-version-813213 kubelet[662]: E0127 13:21:02.568801     662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
	W0127 13:24:14.735733 1391899 logs.go:138] Found kubelet problem: Jan 27 13:21:04 old-k8s-version-813213 kubelet[662]: E0127 13:21:04.569222     662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 13:24:14.735945 1391899 logs.go:138] Found kubelet problem: Jan 27 13:21:15 old-k8s-version-813213 kubelet[662]: E0127 13:21:15.569338     662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 13:24:14.736307 1391899 logs.go:138] Found kubelet problem: Jan 27 13:21:16 old-k8s-version-813213 kubelet[662]: E0127 13:21:16.568781     662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
	W0127 13:24:14.736659 1391899 logs.go:138] Found kubelet problem: Jan 27 13:21:30 old-k8s-version-813213 kubelet[662]: E0127 13:21:30.569462     662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
	W0127 13:24:14.739220 1391899 logs.go:138] Found kubelet problem: Jan 27 13:21:30 old-k8s-version-813213 kubelet[662]: E0127 13:21:30.578384     662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0127 13:24:14.739590 1391899 logs.go:138] Found kubelet problem: Jan 27 13:21:41 old-k8s-version-813213 kubelet[662]: E0127 13:21:41.569455     662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
	W0127 13:24:14.739807 1391899 logs.go:138] Found kubelet problem: Jan 27 13:21:42 old-k8s-version-813213 kubelet[662]: E0127 13:21:42.569422     662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 13:24:14.740162 1391899 logs.go:138] Found kubelet problem: Jan 27 13:21:52 old-k8s-version-813213 kubelet[662]: E0127 13:21:52.568642     662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
	W0127 13:24:14.740367 1391899 logs.go:138] Found kubelet problem: Jan 27 13:21:57 old-k8s-version-813213 kubelet[662]: E0127 13:21:57.570339     662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 13:24:14.740713 1391899 logs.go:138] Found kubelet problem: Jan 27 13:22:03 old-k8s-version-813213 kubelet[662]: E0127 13:22:03.569863     662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
	W0127 13:24:14.740923 1391899 logs.go:138] Found kubelet problem: Jan 27 13:22:12 old-k8s-version-813213 kubelet[662]: E0127 13:22:12.569369     662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 13:24:14.741548 1391899 logs.go:138] Found kubelet problem: Jan 27 13:22:15 old-k8s-version-813213 kubelet[662]: E0127 13:22:15.386979     662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
	W0127 13:24:14.741905 1391899 logs.go:138] Found kubelet problem: Jan 27 13:22:18 old-k8s-version-813213 kubelet[662]: E0127 13:22:18.040158     662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
	W0127 13:24:14.742108 1391899 logs.go:138] Found kubelet problem: Jan 27 13:22:26 old-k8s-version-813213 kubelet[662]: E0127 13:22:26.569341     662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 13:24:14.742460 1391899 logs.go:138] Found kubelet problem: Jan 27 13:22:30 old-k8s-version-813213 kubelet[662]: E0127 13:22:30.568782     662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
	W0127 13:24:14.742682 1391899 logs.go:138] Found kubelet problem: Jan 27 13:22:37 old-k8s-version-813213 kubelet[662]: E0127 13:22:37.572662     662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 13:24:14.743095 1391899 logs.go:138] Found kubelet problem: Jan 27 13:22:41 old-k8s-version-813213 kubelet[662]: E0127 13:22:41.568879     662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
	W0127 13:24:14.743281 1391899 logs.go:138] Found kubelet problem: Jan 27 13:22:49 old-k8s-version-813213 kubelet[662]: E0127 13:22:49.569740     662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 13:24:14.743626 1391899 logs.go:138] Found kubelet problem: Jan 27 13:22:56 old-k8s-version-813213 kubelet[662]: E0127 13:22:56.568752     662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
	W0127 13:24:14.743813 1391899 logs.go:138] Found kubelet problem: Jan 27 13:23:03 old-k8s-version-813213 kubelet[662]: E0127 13:23:03.569135     662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 13:24:14.744134 1391899 logs.go:138] Found kubelet problem: Jan 27 13:23:11 old-k8s-version-813213 kubelet[662]: E0127 13:23:11.569770     662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
	W0127 13:24:14.744341 1391899 logs.go:138] Found kubelet problem: Jan 27 13:23:16 old-k8s-version-813213 kubelet[662]: E0127 13:23:16.569194     662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 13:24:14.744684 1391899 logs.go:138] Found kubelet problem: Jan 27 13:23:26 old-k8s-version-813213 kubelet[662]: E0127 13:23:26.568839     662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
	W0127 13:24:14.744893 1391899 logs.go:138] Found kubelet problem: Jan 27 13:23:28 old-k8s-version-813213 kubelet[662]: E0127 13:23:28.569744     662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 13:24:14.745252 1391899 logs.go:138] Found kubelet problem: Jan 27 13:23:37 old-k8s-version-813213 kubelet[662]: E0127 13:23:37.568849     662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
	W0127 13:24:14.745454 1391899 logs.go:138] Found kubelet problem: Jan 27 13:23:41 old-k8s-version-813213 kubelet[662]: E0127 13:23:41.573539     662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 13:24:14.745819 1391899 logs.go:138] Found kubelet problem: Jan 27 13:23:49 old-k8s-version-813213 kubelet[662]: E0127 13:23:49.569246     662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
	W0127 13:24:14.746065 1391899 logs.go:138] Found kubelet problem: Jan 27 13:23:54 old-k8s-version-813213 kubelet[662]: E0127 13:23:54.569362     662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 13:24:14.746417 1391899 logs.go:138] Found kubelet problem: Jan 27 13:24:03 old-k8s-version-813213 kubelet[662]: E0127 13:24:03.568788     662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
	W0127 13:24:14.746624 1391899 logs.go:138] Found kubelet problem: Jan 27 13:24:05 old-k8s-version-813213 kubelet[662]: E0127 13:24:05.569484     662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 13:24:14.746985 1391899 logs.go:138] Found kubelet problem: Jan 27 13:24:14 old-k8s-version-813213 kubelet[662]: E0127 13:24:14.569118     662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
	I0127 13:24:14.747000 1391899 logs.go:123] Gathering logs for coredns [6987b703de853a384b6187fa686ac707375e62efa22618ca57d58b0628846579] ...
	I0127 13:24:14.747029 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6987b703de853a384b6187fa686ac707375e62efa22618ca57d58b0628846579"
	I0127 13:24:14.802969 1391899 logs.go:123] Gathering logs for kube-proxy [2cbee0b466a6c9c956ae17e795d58e2af64abe7cc2a2822686196036bfcbe2e6] ...
	I0127 13:24:14.802997 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2cbee0b466a6c9c956ae17e795d58e2af64abe7cc2a2822686196036bfcbe2e6"
	I0127 13:24:14.885354 1391899 logs.go:123] Gathering logs for kube-proxy [53392608d921e89b00a7a95975a674e420d59a11fbdfeb24bfb54ebe456ab676] ...
	I0127 13:24:14.885377 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 53392608d921e89b00a7a95975a674e420d59a11fbdfeb24bfb54ebe456ab676"
	I0127 13:24:14.943705 1391899 logs.go:123] Gathering logs for kube-controller-manager [348496f0d58f24df8178fd9da80e0b658cd42c30cb16a5c7f5b4fed5e96bc2c6] ...
	I0127 13:24:14.943731 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 348496f0d58f24df8178fd9da80e0b658cd42c30cb16a5c7f5b4fed5e96bc2c6"
	I0127 13:24:15.004077 1391899 logs.go:123] Gathering logs for storage-provisioner [ffc35dde525c7cc22c06eeef64a37735aeec2d5e82f29e5ae8ca6de1093abde9] ...
	I0127 13:24:15.004163 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ffc35dde525c7cc22c06eeef64a37735aeec2d5e82f29e5ae8ca6de1093abde9"
	I0127 13:24:15.066986 1391899 logs.go:123] Gathering logs for kubernetes-dashboard [84b4623c8ca9caacba41fa6535bc6f3dc8e0c52347d8ef68c993a81a7da957d1] ...
	I0127 13:24:15.067101 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84b4623c8ca9caacba41fa6535bc6f3dc8e0c52347d8ef68c993a81a7da957d1"
	I0127 13:24:15.157150 1391899 logs.go:123] Gathering logs for etcd [207271fc8e8b70d52e363f5f48188c01f5185e49f7efced711705901058b9f7f] ...
	I0127 13:24:15.157183 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 207271fc8e8b70d52e363f5f48188c01f5185e49f7efced711705901058b9f7f"
	I0127 13:24:15.234051 1391899 logs.go:123] Gathering logs for etcd [f5b735d4310b352e1f2a8c5e9bc269f963d80c5a3e1cc6a8016c1873e8e598a5] ...
	I0127 13:24:15.234091 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f5b735d4310b352e1f2a8c5e9bc269f963d80c5a3e1cc6a8016c1873e8e598a5"
	I0127 13:24:15.331724 1391899 logs.go:123] Gathering logs for kube-scheduler [4b2832f8237f04564577430293388c25c8df7894084ea4413f6ef816262c7bbc] ...
	I0127 13:24:15.331918 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b2832f8237f04564577430293388c25c8df7894084ea4413f6ef816262c7bbc"
	I0127 13:24:15.411973 1391899 logs.go:123] Gathering logs for containerd ...
	I0127 13:24:15.412006 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0127 13:24:15.508734 1391899 logs.go:123] Gathering logs for container status ...
	I0127 13:24:15.508770 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 13:24:15.593697 1391899 logs.go:123] Gathering logs for kube-scheduler [498c359719101282d021d8fef3d5d3ebaf865f947cfaafe3e0ea022abbc87f53] ...
	I0127 13:24:15.593769 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 498c359719101282d021d8fef3d5d3ebaf865f947cfaafe3e0ea022abbc87f53"
	I0127 13:24:15.652854 1391899 logs.go:123] Gathering logs for kube-controller-manager [fca0636811bd616b25b61e58c05bade814ad56fe3e02bfa2b5cfa584cbec2b13] ...
	I0127 13:24:15.652938 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fca0636811bd616b25b61e58c05bade814ad56fe3e02bfa2b5cfa584cbec2b13"
	I0127 13:24:15.783362 1391899 logs.go:123] Gathering logs for kindnet [98357c86477c2e61a73e06a97b5bda6b521689ebe81b9245a2adf7c79daceef7] ...
	I0127 13:24:15.783455 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 98357c86477c2e61a73e06a97b5bda6b521689ebe81b9245a2adf7c79daceef7"
	I0127 13:24:15.840977 1391899 out.go:358] Setting ErrFile to fd 2...
	I0127 13:24:15.841062 1391899 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0127 13:24:15.841144 1391899 out.go:270] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0127 13:24:15.841186 1391899 out.go:270]   Jan 27 13:23:49 old-k8s-version-813213 kubelet[662]: E0127 13:23:49.569246     662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
	  Jan 27 13:23:49 old-k8s-version-813213 kubelet[662]: E0127 13:23:49.569246     662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
	W0127 13:24:15.841217 1391899 out.go:270]   Jan 27 13:23:54 old-k8s-version-813213 kubelet[662]: E0127 13:23:54.569362     662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Jan 27 13:23:54 old-k8s-version-813213 kubelet[662]: E0127 13:23:54.569362     662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 13:24:15.841278 1391899 out.go:270]   Jan 27 13:24:03 old-k8s-version-813213 kubelet[662]: E0127 13:24:03.568788     662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
	  Jan 27 13:24:03 old-k8s-version-813213 kubelet[662]: E0127 13:24:03.568788     662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
	W0127 13:24:15.841310 1391899 out.go:270]   Jan 27 13:24:05 old-k8s-version-813213 kubelet[662]: E0127 13:24:05.569484     662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Jan 27 13:24:05 old-k8s-version-813213 kubelet[662]: E0127 13:24:05.569484     662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 13:24:15.841339 1391899 out.go:270]   Jan 27 13:24:14 old-k8s-version-813213 kubelet[662]: E0127 13:24:14.569118     662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
	  Jan 27 13:24:14 old-k8s-version-813213 kubelet[662]: E0127 13:24:14.569118     662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
	I0127 13:24:15.841385 1391899 out.go:358] Setting ErrFile to fd 2...
	I0127 13:24:15.841414 1391899 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 13:24:25.842651 1391899 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0127 13:24:25.852586 1391899 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0127 13:24:25.855881 1391899 out.go:201] 
	W0127 13:24:25.858640 1391899 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W0127 13:24:25.858717 1391899 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	* Suggestion: Control Plane could not update, try minikube delete --all --purge
	W0127 13:24:25.858737 1391899 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	* Related issue: https://github.com/kubernetes/minikube/issues/11417
	W0127 13:24:25.858743 1391899 out.go:270] * 
	* 
	W0127 13:24:25.860146 1391899 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0127 13:24:25.861945 1391899 out.go:201] 

** /stderr **
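The stderr above reduces to two recurring kubelet back-offs: metrics-server never starts because the addon was pointed at the unresolvable registry fake.domain (ImagePullBackOff), and dashboard-metrics-scraper keeps restarting (CrashLoopBackOff with a 2m40s back-off). For local triage of a run like this, a minimal sketch, assuming the profile from this log still exists and is reachable; the label selectors are the stock ones shipped with these add-ons, not anything this harness defines:

    # Describe the pod stuck in ImagePullBackOff and read the scraper's recent output.
    kubectl --context old-k8s-version-813213 -n kube-system describe pod -l k8s-app=metrics-server
    kubectl --context old-k8s-version-813213 -n kubernetes-dashboard logs -l k8s-app=dashboard-metrics-scraper --tail=50

    # Or query the node's container runtime directly, the same way logs.go does above.
    minikube -p old-k8s-version-813213 ssh -- sudo crictl ps -a --name dashboard-metrics-scraper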
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p old-k8s-version-813213 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0": exit status 102
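Exit status 102 corresponds to K8S_UNHEALTHY_CONTROL_PLANE: /healthz returned 200 at 13:24:25, but within the 6m0s wait the control plane never reported the requested v1.20.0. Following the suggestion minikube prints above, a recovery sketch for a local reproduction (destructive: it deletes every minikube profile on the machine):

    minikube delete --all --purge
    out/minikube-linux-arm64 start -p old-k8s-version-813213 --memory=2200 \
      --driver=docker --container-runtime=containerd --kubernetes-version=v1.20.0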
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-813213
helpers_test.go:235: (dbg) docker inspect old-k8s-version-813213:

-- stdout --
	[
	    {
	        "Id": "01d0bc6920ab333099edf1003276a07c986ab78ad4863f8c4206becdeb1ce19b",
	        "Created": "2025-01-27T13:15:07.409447941Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1392093,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-01-27T13:18:14.240947954Z",
	            "FinishedAt": "2025-01-27T13:18:12.850680659Z"
	        },
	        "Image": "sha256:0434cf58b6dbace281e5de753aa4b2e3fe33dc9a3be53021531403743c3f155a",
	        "ResolvConfPath": "/var/lib/docker/containers/01d0bc6920ab333099edf1003276a07c986ab78ad4863f8c4206becdeb1ce19b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/01d0bc6920ab333099edf1003276a07c986ab78ad4863f8c4206becdeb1ce19b/hostname",
	        "HostsPath": "/var/lib/docker/containers/01d0bc6920ab333099edf1003276a07c986ab78ad4863f8c4206becdeb1ce19b/hosts",
	        "LogPath": "/var/lib/docker/containers/01d0bc6920ab333099edf1003276a07c986ab78ad4863f8c4206becdeb1ce19b/01d0bc6920ab333099edf1003276a07c986ab78ad4863f8c4206becdeb1ce19b-json.log",
	        "Name": "/old-k8s-version-813213",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "old-k8s-version-813213:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-813213",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/016debb10dacf6bae7dfea2f47ccd39925ae9e9855d17209bb2fce2a397f34b2-init/diff:/var/lib/docker/overlay2/040f98a182d1ab4d08a5b3f3ff6e1a3c8ab5a734c543c8ed242541f9c435fd6a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/016debb10dacf6bae7dfea2f47ccd39925ae9e9855d17209bb2fce2a397f34b2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/016debb10dacf6bae7dfea2f47ccd39925ae9e9855d17209bb2fce2a397f34b2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/016debb10dacf6bae7dfea2f47ccd39925ae9e9855d17209bb2fce2a397f34b2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-813213",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-813213/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-813213",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-813213",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-813213",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b109f4da99beb548df2edf2aced2b50638a8e97c3385dc4ba1a13d90541d7a53",
	            "SandboxKey": "/var/run/docker/netns/b109f4da99be",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34227"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34228"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34231"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34229"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34230"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-813213": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null,
	                    "NetworkID": "7a4039aaf07ee86cf251afb61766a26e4e33a7d3cffa9eb8f0bfae29a1c2990f",
	                    "EndpointID": "c7959dd6a08541060b1cc125516e4b85be2b677abe057c2f27964c9c1544149a",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-813213",
	                        "01d0bc6920ab"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
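The inspect output confirms the restart succeeded at the Docker level: the container is Running (started 2025-01-27T13:18:14Z, RestartCount 0) and 8443/tcp is published on 127.0.0.1:34230, consistent with the healthz probe against 192.168.76.2:8443 succeeding in the stderr above. To extract just those fields rather than scanning the full JSON, a small sketch assuming jq is available; the filter paths mirror the document shown:

    docker inspect old-k8s-version-813213 \
      | jq '.[0] | {Status: .State.Status, StartedAt: .State.StartedAt, Ports: .NetworkSettings.Ports}'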
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-813213 -n old-k8s-version-813213
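The --format={{.Host}} flag renders the status struct through a Go template, so the helper receives a bare state token such as "Running" or "Stopped" instead of the full status table. Other documented status fields can be combined the same way, for example:

    out/minikube-linux-arm64 status -p old-k8s-version-813213 --format='{{.Host}} {{.Kubelet}} {{.APIServer}}'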
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-813213 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-813213 logs -n 25: (2.52189905s)
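logs -n 25 limits how far back each gathered log goes (25 lines here), which keeps the dump below short; for a complete capture suitable for attaching to an issue, the box in the stderr above already names the command (same profile assumed):

    out/minikube-linux-arm64 -p old-k8s-version-813213 logs --file=logs.txt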
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |         Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	| start   | -p cert-expiration-135138                              | cert-expiration-135138   | jenkins | v1.35.0 | 27 Jan 25 13:13 UTC | 27 Jan 25 13:14 UTC |
	|         | --memory=2048                                          |                          |         |         |                     |                     |
	|         | --cert-expiration=3m                                   |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	| ssh     | force-systemd-env-852325                               | force-systemd-env-852325 | jenkins | v1.35.0 | 27 Jan 25 13:14 UTC | 27 Jan 25 13:14 UTC |
	|         | ssh cat                                                |                          |         |         |                     |                     |
	|         | /etc/containerd/config.toml                            |                          |         |         |                     |                     |
	| delete  | -p force-systemd-env-852325                            | force-systemd-env-852325 | jenkins | v1.35.0 | 27 Jan 25 13:14 UTC | 27 Jan 25 13:14 UTC |
	| start   | -p cert-options-511343                                 | cert-options-511343      | jenkins | v1.35.0 | 27 Jan 25 13:14 UTC | 27 Jan 25 13:14 UTC |
	|         | --memory=2048                                          |                          |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1                              |                          |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15                          |                          |         |         |                     |                     |
	|         | --apiserver-names=localhost                            |                          |         |         |                     |                     |
	|         | --apiserver-names=www.google.com                       |                          |         |         |                     |                     |
	|         | --apiserver-port=8555                                  |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	| ssh     | cert-options-511343 ssh                                | cert-options-511343      | jenkins | v1.35.0 | 27 Jan 25 13:14 UTC | 27 Jan 25 13:14 UTC |
	|         | openssl x509 -text -noout -in                          |                          |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                          |         |         |                     |                     |
	| ssh     | -p cert-options-511343 -- sudo                         | cert-options-511343      | jenkins | v1.35.0 | 27 Jan 25 13:14 UTC | 27 Jan 25 13:14 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                          |         |         |                     |                     |
	| delete  | -p cert-options-511343                                 | cert-options-511343      | jenkins | v1.35.0 | 27 Jan 25 13:14 UTC | 27 Jan 25 13:14 UTC |
	| start   | -p old-k8s-version-813213                              | old-k8s-version-813213   | jenkins | v1.35.0 | 27 Jan 25 13:14 UTC | 27 Jan 25 13:17 UTC |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                          |         |         |                     |                     |
	|         | --kvm-network=default                                  |                          |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                          |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                          |         |         |                     |                     |
	|         | --keep-context=false                                   |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                          |         |         |                     |                     |
	| start   | -p cert-expiration-135138                              | cert-expiration-135138   | jenkins | v1.35.0 | 27 Jan 25 13:17 UTC | 27 Jan 25 13:17 UTC |
	|         | --memory=2048                                          |                          |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	| delete  | -p cert-expiration-135138                              | cert-expiration-135138   | jenkins | v1.35.0 | 27 Jan 25 13:17 UTC | 27 Jan 25 13:17 UTC |
	| start   | -p no-preload-181914                                   | no-preload-181914        | jenkins | v1.35.0 | 27 Jan 25 13:17 UTC | 27 Jan 25 13:18 UTC |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr                                      |                          |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                          |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-813213        | old-k8s-version-813213   | jenkins | v1.35.0 | 27 Jan 25 13:17 UTC | 27 Jan 25 13:18 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                          |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                          |         |         |                     |                     |
	| stop    | -p old-k8s-version-813213                              | old-k8s-version-813213   | jenkins | v1.35.0 | 27 Jan 25 13:18 UTC | 27 Jan 25 13:18 UTC |
	|         | --alsologtostderr -v=3                                 |                          |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-813213             | old-k8s-version-813213   | jenkins | v1.35.0 | 27 Jan 25 13:18 UTC | 27 Jan 25 13:18 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                          |         |         |                     |                     |
	| start   | -p old-k8s-version-813213                              | old-k8s-version-813213   | jenkins | v1.35.0 | 27 Jan 25 13:18 UTC |                     |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                          |         |         |                     |                     |
	|         | --kvm-network=default                                  |                          |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                          |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                          |         |         |                     |                     |
	|         | --keep-context=false                                   |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                          |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-181914             | no-preload-181914        | jenkins | v1.35.0 | 27 Jan 25 13:18 UTC | 27 Jan 25 13:18 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                          |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                          |         |         |                     |                     |
	| stop    | -p no-preload-181914                                   | no-preload-181914        | jenkins | v1.35.0 | 27 Jan 25 13:18 UTC | 27 Jan 25 13:19 UTC |
	|         | --alsologtostderr -v=3                                 |                          |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-181914                  | no-preload-181914        | jenkins | v1.35.0 | 27 Jan 25 13:19 UTC | 27 Jan 25 13:19 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                          |         |         |                     |                     |
	| start   | -p no-preload-181914                                   | no-preload-181914        | jenkins | v1.35.0 | 27 Jan 25 13:19 UTC | 27 Jan 25 13:23 UTC |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr                                      |                          |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                          |         |         |                     |                     |
	| image   | no-preload-181914 image list                           | no-preload-181914        | jenkins | v1.35.0 | 27 Jan 25 13:24 UTC | 27 Jan 25 13:24 UTC |
	|         | --format=json                                          |                          |         |         |                     |                     |
	| pause   | -p no-preload-181914                                   | no-preload-181914        | jenkins | v1.35.0 | 27 Jan 25 13:24 UTC | 27 Jan 25 13:24 UTC |
	|         | --alsologtostderr -v=1                                 |                          |         |         |                     |                     |
	| unpause | -p no-preload-181914                                   | no-preload-181914        | jenkins | v1.35.0 | 27 Jan 25 13:24 UTC | 27 Jan 25 13:24 UTC |
	|         | --alsologtostderr -v=1                                 |                          |         |         |                     |                     |
	| delete  | -p no-preload-181914                                   | no-preload-181914        | jenkins | v1.35.0 | 27 Jan 25 13:24 UTC | 27 Jan 25 13:24 UTC |
	| delete  | -p no-preload-181914                                   | no-preload-181914        | jenkins | v1.35.0 | 27 Jan 25 13:24 UTC | 27 Jan 25 13:24 UTC |
	| start   | -p embed-certs-434512                                  | embed-certs-434512       | jenkins | v1.35.0 | 27 Jan 25 13:24 UTC |                     |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                          |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                          |         |         |                     |                     |
	|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/27 13:24:13
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.23.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 13:24:13.918521 1402708 out.go:345] Setting OutFile to fd 1 ...
	I0127 13:24:13.918726 1402708 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 13:24:13.918754 1402708 out.go:358] Setting ErrFile to fd 2...
	I0127 13:24:13.918774 1402708 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 13:24:13.919073 1402708 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20317-1181389/.minikube/bin
	I0127 13:24:13.919814 1402708 out.go:352] Setting JSON to false
	I0127 13:24:13.921438 1402708 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":21999,"bootTime":1737962255,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0127 13:24:13.921543 1402708 start.go:139] virtualization:  
	I0127 13:24:13.925324 1402708 out.go:177] * [embed-certs-434512] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	I0127 13:24:13.928489 1402708 notify.go:220] Checking for updates...
	I0127 13:24:13.932245 1402708 out.go:177]   - MINIKUBE_LOCATION=20317
	I0127 13:24:13.934964 1402708 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 13:24:13.937552 1402708 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20317-1181389/kubeconfig
	I0127 13:24:13.940293 1402708 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20317-1181389/.minikube
	I0127 13:24:13.942996 1402708 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0127 13:24:13.945710 1402708 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 13:24:13.949156 1402708 config.go:182] Loaded profile config "old-k8s-version-813213": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0127 13:24:13.949293 1402708 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 13:24:13.987890 1402708 docker.go:123] docker version: linux-27.5.1:Docker Engine - Community
	I0127 13:24:13.988000 1402708 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0127 13:24:14.101532 1402708 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:53 SystemTime:2025-01-27 13:24:14.085872952 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0127 13:24:14.101672 1402708 docker.go:318] overlay module found
	I0127 13:24:14.106440 1402708 out.go:177] * Using the docker driver based on user configuration
	I0127 13:24:14.108996 1402708 start.go:297] selected driver: docker
	I0127 13:24:14.109014 1402708 start.go:901] validating driver "docker" against <nil>
	I0127 13:24:14.109069 1402708 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 13:24:14.109952 1402708 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0127 13:24:14.222233 1402708 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:53 SystemTime:2025-01-27 13:24:14.210058419 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0127 13:24:14.222462 1402708 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0127 13:24:14.222752 1402708 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 13:24:14.225528 1402708 out.go:177] * Using Docker driver with root privileges
	I0127 13:24:14.228158 1402708 cni.go:84] Creating CNI manager for ""
	I0127 13:24:14.228232 1402708 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0127 13:24:14.228244 1402708 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0127 13:24:14.228329 1402708 start.go:340] cluster config:
	{Name:embed-certs-434512 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:embed-certs-434512 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 13:24:14.231216 1402708 out.go:177] * Starting "embed-certs-434512" primary control-plane node in "embed-certs-434512" cluster
	I0127 13:24:14.233857 1402708 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0127 13:24:14.236575 1402708 out.go:177] * Pulling base image v0.0.46 ...
	I0127 13:24:14.239208 1402708 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
	I0127 13:24:14.239262 1402708 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20317-1181389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-arm64.tar.lz4
	I0127 13:24:14.239270 1402708 cache.go:56] Caching tarball of preloaded images
	I0127 13:24:14.239328 1402708 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon
	I0127 13:24:14.239598 1402708 preload.go:172] Found /home/jenkins/minikube-integration/20317-1181389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0127 13:24:14.239613 1402708 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on containerd
	I0127 13:24:14.239717 1402708 profile.go:143] Saving config to /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/embed-certs-434512/config.json ...
	I0127 13:24:14.239733 1402708 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/embed-certs-434512/config.json: {Name:mk7721c0da76923e66fe0d486f38160c27950491 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:24:14.262893 1402708 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon, skipping pull
	I0127 13:24:14.262912 1402708 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 exists in daemon, skipping load
	I0127 13:24:14.262924 1402708 cache.go:227] Successfully downloaded all kic artifacts
	I0127 13:24:14.262946 1402708 start.go:360] acquireMachinesLock for embed-certs-434512: {Name:mk2586b0657c09793a36438ce1b60de336afbd2b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 13:24:14.263057 1402708 start.go:364] duration metric: took 95.718µs to acquireMachinesLock for "embed-certs-434512"
	I0127 13:24:14.263082 1402708 start.go:93] Provisioning new machine with config: &{Name:embed-certs-434512 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:embed-certs-434512 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0127 13:24:14.263153 1402708 start.go:125] createHost starting for "" (driver="docker")
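
The config write at 13:24:14.239733 above goes through a lockfile (Delay:500ms Timeout:1m0s) before touching config.json, and acquireMachinesLock at 13:24:14.262946 follows the same shape with a 10m timeout. A minimal stdlib-only sketch of that acquire-with-retry pattern, assuming O_EXCL-lockfile semantics purely for illustration (this is not minikube's actual lock.go):

    // Hypothetical sketch: poll for an exclusive lockfile with a fixed delay
    // until a deadline, mirroring the Delay/Timeout values in the log above.
    package main

    import (
        "errors"
        "fmt"
        "os"
        "time"
    )

    // acquireLock retries O_CREATE|O_EXCL until it wins or the timeout elapses.
    func acquireLock(path string, delay, timeout time.Duration) (release func(), err error) {
        deadline := time.Now().Add(timeout)
        for {
            f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
            if err == nil {
                f.Close()
                return func() { os.Remove(path) }, nil
            }
            if !errors.Is(err, os.ErrExist) {
                return nil, err // real I/O error, not just contention
            }
            if time.Now().After(deadline) {
                return nil, fmt.Errorf("timed out waiting for lock %s", path)
            }
            time.Sleep(delay)
        }
    }

    func main() {
        release, err := acquireLock("config.json.lock", 500*time.Millisecond, time.Minute)
        if err != nil {
            panic(err)
        }
        defer release()
        // The config file can be written safely while the lock is held.
        _ = os.WriteFile("config.json", []byte("{}"), 0o644)
    }
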
	I0127 13:24:13.636050 1391899 cri.go:89] found id: "6987b703de853a384b6187fa686ac707375e62efa22618ca57d58b0628846579"
	I0127 13:24:13.636069 1391899 cri.go:89] found id: "9977eebb81ceefd915fb8ae9adc54fd8743f8453af41100c841a3b59d1f3e72c"
	I0127 13:24:13.636074 1391899 cri.go:89] found id: ""
	I0127 13:24:13.636081 1391899 logs.go:282] 2 containers: [6987b703de853a384b6187fa686ac707375e62efa22618ca57d58b0628846579 9977eebb81ceefd915fb8ae9adc54fd8743f8453af41100c841a3b59d1f3e72c]
	I0127 13:24:13.636140 1391899 ssh_runner.go:195] Run: which crictl
	I0127 13:24:13.641250 1391899 ssh_runner.go:195] Run: which crictl
	I0127 13:24:13.645845 1391899 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0127 13:24:13.645910 1391899 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 13:24:13.730109 1391899 cri.go:89] found id: "498c359719101282d021d8fef3d5d3ebaf865f947cfaafe3e0ea022abbc87f53"
	I0127 13:24:13.730126 1391899 cri.go:89] found id: "4b2832f8237f04564577430293388c25c8df7894084ea4413f6ef816262c7bbc"
	I0127 13:24:13.730131 1391899 cri.go:89] found id: ""
	I0127 13:24:13.730138 1391899 logs.go:282] 2 containers: [498c359719101282d021d8fef3d5d3ebaf865f947cfaafe3e0ea022abbc87f53 4b2832f8237f04564577430293388c25c8df7894084ea4413f6ef816262c7bbc]
	I0127 13:24:13.730188 1391899 ssh_runner.go:195] Run: which crictl
	I0127 13:24:13.735061 1391899 ssh_runner.go:195] Run: which crictl
	I0127 13:24:13.739961 1391899 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0127 13:24:13.740030 1391899 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 13:24:13.793549 1391899 cri.go:89] found id: "53392608d921e89b00a7a95975a674e420d59a11fbdfeb24bfb54ebe456ab676"
	I0127 13:24:13.793568 1391899 cri.go:89] found id: "2cbee0b466a6c9c956ae17e795d58e2af64abe7cc2a2822686196036bfcbe2e6"
	I0127 13:24:13.793573 1391899 cri.go:89] found id: ""
	I0127 13:24:13.793580 1391899 logs.go:282] 2 containers: [53392608d921e89b00a7a95975a674e420d59a11fbdfeb24bfb54ebe456ab676 2cbee0b466a6c9c956ae17e795d58e2af64abe7cc2a2822686196036bfcbe2e6]
	I0127 13:24:13.793635 1391899 ssh_runner.go:195] Run: which crictl
	I0127 13:24:13.798974 1391899 ssh_runner.go:195] Run: which crictl
	I0127 13:24:13.803128 1391899 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 13:24:13.803199 1391899 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 13:24:13.865547 1391899 cri.go:89] found id: "348496f0d58f24df8178fd9da80e0b658cd42c30cb16a5c7f5b4fed5e96bc2c6"
	I0127 13:24:13.865586 1391899 cri.go:89] found id: "fca0636811bd616b25b61e58c05bade814ad56fe3e02bfa2b5cfa584cbec2b13"
	I0127 13:24:13.865591 1391899 cri.go:89] found id: ""
	I0127 13:24:13.865597 1391899 logs.go:282] 2 containers: [348496f0d58f24df8178fd9da80e0b658cd42c30cb16a5c7f5b4fed5e96bc2c6 fca0636811bd616b25b61e58c05bade814ad56fe3e02bfa2b5cfa584cbec2b13]
	I0127 13:24:13.865654 1391899 ssh_runner.go:195] Run: which crictl
	I0127 13:24:13.869602 1391899 ssh_runner.go:195] Run: which crictl
	I0127 13:24:13.873071 1391899 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0127 13:24:13.873189 1391899 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 13:24:13.920522 1391899 cri.go:89] found id: "98357c86477c2e61a73e06a97b5bda6b521689ebe81b9245a2adf7c79daceef7"
	I0127 13:24:13.920541 1391899 cri.go:89] found id: "8f9ba8ca38617dd4c57c48f007e00144c96edc8377410ac33996e89c6c626ae7"
	I0127 13:24:13.920546 1391899 cri.go:89] found id: ""
	I0127 13:24:13.920553 1391899 logs.go:282] 2 containers: [98357c86477c2e61a73e06a97b5bda6b521689ebe81b9245a2adf7c79daceef7 8f9ba8ca38617dd4c57c48f007e00144c96edc8377410ac33996e89c6c626ae7]
	I0127 13:24:13.920606 1391899 ssh_runner.go:195] Run: which crictl
	I0127 13:24:13.924728 1391899 ssh_runner.go:195] Run: which crictl
	I0127 13:24:13.928717 1391899 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0127 13:24:13.928777 1391899 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0127 13:24:13.981265 1391899 cri.go:89] found id: "ffc35dde525c7cc22c06eeef64a37735aeec2d5e82f29e5ae8ca6de1093abde9"
	I0127 13:24:13.981289 1391899 cri.go:89] found id: "eb3408e2536487de3f2f4d60740cac93d750307a3768323cc24df540e774eb8c"
	I0127 13:24:13.981294 1391899 cri.go:89] found id: ""
	I0127 13:24:13.981300 1391899 logs.go:282] 2 containers: [ffc35dde525c7cc22c06eeef64a37735aeec2d5e82f29e5ae8ca6de1093abde9 eb3408e2536487de3f2f4d60740cac93d750307a3768323cc24df540e774eb8c]
	I0127 13:24:13.981386 1391899 ssh_runner.go:195] Run: which crictl
	I0127 13:24:13.985260 1391899 ssh_runner.go:195] Run: which crictl
	I0127 13:24:13.988991 1391899 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 13:24:13.989131 1391899 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 13:24:14.041772 1391899 cri.go:89] found id: "84b4623c8ca9caacba41fa6535bc6f3dc8e0c52347d8ef68c993a81a7da957d1"
	I0127 13:24:14.041793 1391899 cri.go:89] found id: ""
	I0127 13:24:14.041801 1391899 logs.go:282] 1 containers: [84b4623c8ca9caacba41fa6535bc6f3dc8e0c52347d8ef68c993a81a7da957d1]
	I0127 13:24:14.041860 1391899 ssh_runner.go:195] Run: which crictl
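
The run of `which crictl` / `sudo crictl ps -a --quiet --name=<component>` pairs above is the container-discovery pass: one query per control-plane component, with the returned IDs feeding the log-gathering steps that follow. A rough local equivalent of that loop, assuming crictl is on PATH and calling exec directly where minikube goes through its ssh_runner:

    // Sketch: enumerate CRI container IDs per component, as the log shows.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs returns the IDs printed by `crictl ps -a --quiet --name=<name>`.
    func containerIDs(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil // crictl emits one ID per line
    }

    func main() {
        components := []string{
            "kube-scheduler", "kube-proxy", "kube-controller-manager",
            "kindnet", "storage-provisioner", "kubernetes-dashboard",
        }
        for _, c := range components {
            ids, err := containerIDs(c)
            if err != nil {
                fmt.Printf("%s: %v\n", c, err)
                continue
            }
            fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
        }
    }
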
	I0127 13:24:14.045758 1391899 logs.go:123] Gathering logs for kube-apiserver [dbf3fe12cc514cdb259c989c31b00ebdd7cf91640d7cf28f457a8a8ee0d4a0ba] ...
	I0127 13:24:14.045783 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dbf3fe12cc514cdb259c989c31b00ebdd7cf91640d7cf28f457a8a8ee0d4a0ba"
	I0127 13:24:14.119271 1391899 logs.go:123] Gathering logs for coredns [9977eebb81ceefd915fb8ae9adc54fd8743f8453af41100c841a3b59d1f3e72c] ...
	I0127 13:24:14.119329 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9977eebb81ceefd915fb8ae9adc54fd8743f8453af41100c841a3b59d1f3e72c"
	I0127 13:24:14.184713 1391899 logs.go:123] Gathering logs for dmesg ...
	I0127 13:24:14.184744 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 13:24:14.205804 1391899 logs.go:123] Gathering logs for describe nodes ...
	I0127 13:24:14.205839 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0127 13:24:14.425188 1391899 logs.go:123] Gathering logs for kube-apiserver [9dc682ca643e0a8340eec4be4ed13358824df730d5eace4f73db56ba726b7da7] ...
	I0127 13:24:14.425269 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9dc682ca643e0a8340eec4be4ed13358824df730d5eace4f73db56ba726b7da7"
	I0127 13:24:14.513059 1391899 logs.go:123] Gathering logs for kindnet [8f9ba8ca38617dd4c57c48f007e00144c96edc8377410ac33996e89c6c626ae7] ...
	I0127 13:24:14.513133 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f9ba8ca38617dd4c57c48f007e00144c96edc8377410ac33996e89c6c626ae7"
	I0127 13:24:14.569064 1391899 logs.go:123] Gathering logs for storage-provisioner [eb3408e2536487de3f2f4d60740cac93d750307a3768323cc24df540e774eb8c] ...
	I0127 13:24:14.569092 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb3408e2536487de3f2f4d60740cac93d750307a3768323cc24df540e774eb8c"
	I0127 13:24:14.641486 1391899 logs.go:123] Gathering logs for kubelet ...
	I0127 13:24:14.641555 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0127 13:24:14.715359 1391899 logs.go:138] Found kubelet problem: Jan 27 13:18:40 old-k8s-version-813213 kubelet[662]: E0127 13:18:40.161443     662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0127 13:24:14.716229 1391899 logs.go:138] Found kubelet problem: Jan 27 13:18:40 old-k8s-version-813213 kubelet[662]: E0127 13:18:40.742912     662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 13:24:14.719582 1391899 logs.go:138] Found kubelet problem: Jan 27 13:18:52 old-k8s-version-813213 kubelet[662]: E0127 13:18:52.582413     662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0127 13:24:14.721888 1391899 logs.go:138] Found kubelet problem: Jan 27 13:19:00 old-k8s-version-813213 kubelet[662]: E0127 13:19:00.834386     662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
	W0127 13:24:14.722259 1391899 logs.go:138] Found kubelet problem: Jan 27 13:19:01 old-k8s-version-813213 kubelet[662]: E0127 13:19:01.844499     662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
	W0127 13:24:14.722476 1391899 logs.go:138] Found kubelet problem: Jan 27 13:19:04 old-k8s-version-813213 kubelet[662]: E0127 13:19:04.569022     662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 13:24:14.722844 1391899 logs.go:138] Found kubelet problem: Jan 27 13:19:08 old-k8s-version-813213 kubelet[662]: E0127 13:19:08.040193     662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
	W0127 13:24:14.723653 1391899 logs.go:138] Found kubelet problem: Jan 27 13:19:10 old-k8s-version-813213 kubelet[662]: E0127 13:19:10.866611     662 pod_workers.go:191] Error syncing pod b3ee3aee-1b8f-4040-9cbf-f87cb41abfd5 ("storage-provisioner_kube-system(b3ee3aee-1b8f-4040-9cbf-f87cb41abfd5)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(b3ee3aee-1b8f-4040-9cbf-f87cb41abfd5)"
	W0127 13:24:14.726364 1391899 logs.go:138] Found kubelet problem: Jan 27 13:19:16 old-k8s-version-813213 kubelet[662]: E0127 13:19:16.578183     662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0127 13:24:14.727343 1391899 logs.go:138] Found kubelet problem: Jan 27 13:19:21 old-k8s-version-813213 kubelet[662]: E0127 13:19:21.914690     662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
	W0127 13:24:14.727830 1391899 logs.go:138] Found kubelet problem: Jan 27 13:19:28 old-k8s-version-813213 kubelet[662]: E0127 13:19:28.040605     662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
	W0127 13:24:14.728048 1391899 logs.go:138] Found kubelet problem: Jan 27 13:19:31 old-k8s-version-813213 kubelet[662]: E0127 13:19:31.569400     662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 13:24:14.728403 1391899 logs.go:138] Found kubelet problem: Jan 27 13:19:40 old-k8s-version-813213 kubelet[662]: E0127 13:19:40.569017     662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
	W0127 13:24:14.728614 1391899 logs.go:138] Found kubelet problem: Jan 27 13:19:46 old-k8s-version-813213 kubelet[662]: E0127 13:19:46.569235     662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 13:24:14.729266 1391899 logs.go:138] Found kubelet problem: Jan 27 13:19:54 old-k8s-version-813213 kubelet[662]: E0127 13:19:54.998749     662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
	W0127 13:24:14.729680 1391899 logs.go:138] Found kubelet problem: Jan 27 13:19:58 old-k8s-version-813213 kubelet[662]: E0127 13:19:58.040157     662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
	W0127 13:24:14.732147 1391899 logs.go:138] Found kubelet problem: Jan 27 13:20:00 old-k8s-version-813213 kubelet[662]: E0127 13:20:00.591029     662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0127 13:24:14.732514 1391899 logs.go:138] Found kubelet problem: Jan 27 13:20:08 old-k8s-version-813213 kubelet[662]: E0127 13:20:08.568750     662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
	W0127 13:24:14.732722 1391899 logs.go:138] Found kubelet problem: Jan 27 13:20:14 old-k8s-version-813213 kubelet[662]: E0127 13:20:14.569356     662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 13:24:14.733086 1391899 logs.go:138] Found kubelet problem: Jan 27 13:20:20 old-k8s-version-813213 kubelet[662]: E0127 13:20:20.568757     662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
	W0127 13:24:14.733291 1391899 logs.go:138] Found kubelet problem: Jan 27 13:20:25 old-k8s-version-813213 kubelet[662]: E0127 13:20:25.569626     662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 13:24:14.733665 1391899 logs.go:138] Found kubelet problem: Jan 27 13:20:33 old-k8s-version-813213 kubelet[662]: E0127 13:20:33.569406     662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
	W0127 13:24:14.733881 1391899 logs.go:138] Found kubelet problem: Jan 27 13:20:37 old-k8s-version-813213 kubelet[662]: E0127 13:20:37.569368     662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 13:24:14.734571 1391899 logs.go:138] Found kubelet problem: Jan 27 13:20:45 old-k8s-version-813213 kubelet[662]: E0127 13:20:45.161804     662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
	W0127 13:24:14.734937 1391899 logs.go:138] Found kubelet problem: Jan 27 13:20:48 old-k8s-version-813213 kubelet[662]: E0127 13:20:48.040150     662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
	W0127 13:24:14.735156 1391899 logs.go:138] Found kubelet problem: Jan 27 13:20:51 old-k8s-version-813213 kubelet[662]: E0127 13:20:51.569490     662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 13:24:14.735527 1391899 logs.go:138] Found kubelet problem: Jan 27 13:21:02 old-k8s-version-813213 kubelet[662]: E0127 13:21:02.568801     662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
	W0127 13:24:14.735733 1391899 logs.go:138] Found kubelet problem: Jan 27 13:21:04 old-k8s-version-813213 kubelet[662]: E0127 13:21:04.569222     662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 13:24:14.735945 1391899 logs.go:138] Found kubelet problem: Jan 27 13:21:15 old-k8s-version-813213 kubelet[662]: E0127 13:21:15.569338     662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 13:24:14.736307 1391899 logs.go:138] Found kubelet problem: Jan 27 13:21:16 old-k8s-version-813213 kubelet[662]: E0127 13:21:16.568781     662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
	W0127 13:24:14.736659 1391899 logs.go:138] Found kubelet problem: Jan 27 13:21:30 old-k8s-version-813213 kubelet[662]: E0127 13:21:30.569462     662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
	W0127 13:24:14.739220 1391899 logs.go:138] Found kubelet problem: Jan 27 13:21:30 old-k8s-version-813213 kubelet[662]: E0127 13:21:30.578384     662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0127 13:24:14.739590 1391899 logs.go:138] Found kubelet problem: Jan 27 13:21:41 old-k8s-version-813213 kubelet[662]: E0127 13:21:41.569455     662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
	W0127 13:24:14.739807 1391899 logs.go:138] Found kubelet problem: Jan 27 13:21:42 old-k8s-version-813213 kubelet[662]: E0127 13:21:42.569422     662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 13:24:14.740162 1391899 logs.go:138] Found kubelet problem: Jan 27 13:21:52 old-k8s-version-813213 kubelet[662]: E0127 13:21:52.568642     662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
	W0127 13:24:14.740367 1391899 logs.go:138] Found kubelet problem: Jan 27 13:21:57 old-k8s-version-813213 kubelet[662]: E0127 13:21:57.570339     662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 13:24:14.740713 1391899 logs.go:138] Found kubelet problem: Jan 27 13:22:03 old-k8s-version-813213 kubelet[662]: E0127 13:22:03.569863     662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
	W0127 13:24:14.740923 1391899 logs.go:138] Found kubelet problem: Jan 27 13:22:12 old-k8s-version-813213 kubelet[662]: E0127 13:22:12.569369     662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 13:24:14.741548 1391899 logs.go:138] Found kubelet problem: Jan 27 13:22:15 old-k8s-version-813213 kubelet[662]: E0127 13:22:15.386979     662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
	W0127 13:24:14.741905 1391899 logs.go:138] Found kubelet problem: Jan 27 13:22:18 old-k8s-version-813213 kubelet[662]: E0127 13:22:18.040158     662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
	W0127 13:24:14.742108 1391899 logs.go:138] Found kubelet problem: Jan 27 13:22:26 old-k8s-version-813213 kubelet[662]: E0127 13:22:26.569341     662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 13:24:14.742460 1391899 logs.go:138] Found kubelet problem: Jan 27 13:22:30 old-k8s-version-813213 kubelet[662]: E0127 13:22:30.568782     662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
	W0127 13:24:14.742682 1391899 logs.go:138] Found kubelet problem: Jan 27 13:22:37 old-k8s-version-813213 kubelet[662]: E0127 13:22:37.572662     662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 13:24:14.743095 1391899 logs.go:138] Found kubelet problem: Jan 27 13:22:41 old-k8s-version-813213 kubelet[662]: E0127 13:22:41.568879     662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
	W0127 13:24:14.743281 1391899 logs.go:138] Found kubelet problem: Jan 27 13:22:49 old-k8s-version-813213 kubelet[662]: E0127 13:22:49.569740     662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 13:24:14.743626 1391899 logs.go:138] Found kubelet problem: Jan 27 13:22:56 old-k8s-version-813213 kubelet[662]: E0127 13:22:56.568752     662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
	W0127 13:24:14.743813 1391899 logs.go:138] Found kubelet problem: Jan 27 13:23:03 old-k8s-version-813213 kubelet[662]: E0127 13:23:03.569135     662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 13:24:14.744134 1391899 logs.go:138] Found kubelet problem: Jan 27 13:23:11 old-k8s-version-813213 kubelet[662]: E0127 13:23:11.569770     662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
	W0127 13:24:14.744341 1391899 logs.go:138] Found kubelet problem: Jan 27 13:23:16 old-k8s-version-813213 kubelet[662]: E0127 13:23:16.569194     662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 13:24:14.744684 1391899 logs.go:138] Found kubelet problem: Jan 27 13:23:26 old-k8s-version-813213 kubelet[662]: E0127 13:23:26.568839     662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
	W0127 13:24:14.744893 1391899 logs.go:138] Found kubelet problem: Jan 27 13:23:28 old-k8s-version-813213 kubelet[662]: E0127 13:23:28.569744     662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 13:24:14.745252 1391899 logs.go:138] Found kubelet problem: Jan 27 13:23:37 old-k8s-version-813213 kubelet[662]: E0127 13:23:37.568849     662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
	W0127 13:24:14.745454 1391899 logs.go:138] Found kubelet problem: Jan 27 13:23:41 old-k8s-version-813213 kubelet[662]: E0127 13:23:41.573539     662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 13:24:14.745819 1391899 logs.go:138] Found kubelet problem: Jan 27 13:23:49 old-k8s-version-813213 kubelet[662]: E0127 13:23:49.569246     662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
	W0127 13:24:14.746065 1391899 logs.go:138] Found kubelet problem: Jan 27 13:23:54 old-k8s-version-813213 kubelet[662]: E0127 13:23:54.569362     662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 13:24:14.746417 1391899 logs.go:138] Found kubelet problem: Jan 27 13:24:03 old-k8s-version-813213 kubelet[662]: E0127 13:24:03.568788     662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
	W0127 13:24:14.746624 1391899 logs.go:138] Found kubelet problem: Jan 27 13:24:05 old-k8s-version-813213 kubelet[662]: E0127 13:24:05.569484     662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 13:24:14.746985 1391899 logs.go:138] Found kubelet problem: Jan 27 13:24:14 old-k8s-version-813213 kubelet[662]: E0127 13:24:14.569118     662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
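
Every "Found kubelet problem" W-line above comes from scanning the `journalctl -u kubelet -n 400` output gathered at 13:24:14.641555. A plausible reconstruction of that scan follows; the regex is a guess at what logs.go matches on, not the real matcher:

    // Sketch: stream journalctl output and flag kubelet pod_workers error lines.
    package main

    import (
        "bufio"
        "fmt"
        "os/exec"
        "regexp"
    )

    // problem matches kubelet error entries like the pod_workers lines above.
    var problem = regexp.MustCompile(`E\d{4} .*pod_workers\.go.*Error syncing pod`)

    func main() {
        cmd := exec.Command("journalctl", "-u", "kubelet", "-n", "400")
        out, err := cmd.StdoutPipe()
        if err != nil {
            panic(err)
        }
        if err := cmd.Start(); err != nil {
            panic(err)
        }
        sc := bufio.NewScanner(out)
        sc.Buffer(make([]byte, 0, 64*1024), 1024*1024) // journal lines can be long
        for sc.Scan() {
            if line := sc.Text(); problem.MatchString(line) {
                fmt.Println("Found kubelet problem:", line)
            }
        }
        _ = cmd.Wait()
    }
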
	I0127 13:24:14.747000 1391899 logs.go:123] Gathering logs for coredns [6987b703de853a384b6187fa686ac707375e62efa22618ca57d58b0628846579] ...
	I0127 13:24:14.747029 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6987b703de853a384b6187fa686ac707375e62efa22618ca57d58b0628846579"
	I0127 13:24:14.802969 1391899 logs.go:123] Gathering logs for kube-proxy [2cbee0b466a6c9c956ae17e795d58e2af64abe7cc2a2822686196036bfcbe2e6] ...
	I0127 13:24:14.802997 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2cbee0b466a6c9c956ae17e795d58e2af64abe7cc2a2822686196036bfcbe2e6"
	I0127 13:24:14.885354 1391899 logs.go:123] Gathering logs for kube-proxy [53392608d921e89b00a7a95975a674e420d59a11fbdfeb24bfb54ebe456ab676] ...
	I0127 13:24:14.885377 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 53392608d921e89b00a7a95975a674e420d59a11fbdfeb24bfb54ebe456ab676"
	I0127 13:24:14.943705 1391899 logs.go:123] Gathering logs for kube-controller-manager [348496f0d58f24df8178fd9da80e0b658cd42c30cb16a5c7f5b4fed5e96bc2c6] ...
	I0127 13:24:14.943731 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 348496f0d58f24df8178fd9da80e0b658cd42c30cb16a5c7f5b4fed5e96bc2c6"
	I0127 13:24:15.004077 1391899 logs.go:123] Gathering logs for storage-provisioner [ffc35dde525c7cc22c06eeef64a37735aeec2d5e82f29e5ae8ca6de1093abde9] ...
	I0127 13:24:15.004163 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ffc35dde525c7cc22c06eeef64a37735aeec2d5e82f29e5ae8ca6de1093abde9"
	I0127 13:24:15.066986 1391899 logs.go:123] Gathering logs for kubernetes-dashboard [84b4623c8ca9caacba41fa6535bc6f3dc8e0c52347d8ef68c993a81a7da957d1] ...
	I0127 13:24:15.067101 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84b4623c8ca9caacba41fa6535bc6f3dc8e0c52347d8ef68c993a81a7da957d1"
	I0127 13:24:15.157150 1391899 logs.go:123] Gathering logs for etcd [207271fc8e8b70d52e363f5f48188c01f5185e49f7efced711705901058b9f7f] ...
	I0127 13:24:15.157183 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 207271fc8e8b70d52e363f5f48188c01f5185e49f7efced711705901058b9f7f"
	I0127 13:24:15.234051 1391899 logs.go:123] Gathering logs for etcd [f5b735d4310b352e1f2a8c5e9bc269f963d80c5a3e1cc6a8016c1873e8e598a5] ...
	I0127 13:24:15.234091 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f5b735d4310b352e1f2a8c5e9bc269f963d80c5a3e1cc6a8016c1873e8e598a5"
	I0127 13:24:15.331724 1391899 logs.go:123] Gathering logs for kube-scheduler [4b2832f8237f04564577430293388c25c8df7894084ea4413f6ef816262c7bbc] ...
	I0127 13:24:15.331918 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b2832f8237f04564577430293388c25c8df7894084ea4413f6ef816262c7bbc"
	I0127 13:24:15.411973 1391899 logs.go:123] Gathering logs for containerd ...
	I0127 13:24:15.412006 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0127 13:24:15.508734 1391899 logs.go:123] Gathering logs for container status ...
	I0127 13:24:15.508770 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 13:24:15.593697 1391899 logs.go:123] Gathering logs for kube-scheduler [498c359719101282d021d8fef3d5d3ebaf865f947cfaafe3e0ea022abbc87f53] ...
	I0127 13:24:15.593769 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 498c359719101282d021d8fef3d5d3ebaf865f947cfaafe3e0ea022abbc87f53"
	I0127 13:24:15.652854 1391899 logs.go:123] Gathering logs for kube-controller-manager [fca0636811bd616b25b61e58c05bade814ad56fe3e02bfa2b5cfa584cbec2b13] ...
	I0127 13:24:15.652938 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fca0636811bd616b25b61e58c05bade814ad56fe3e02bfa2b5cfa584cbec2b13"
	I0127 13:24:15.783362 1391899 logs.go:123] Gathering logs for kindnet [98357c86477c2e61a73e06a97b5bda6b521689ebe81b9245a2adf7c79daceef7] ...
	I0127 13:24:15.783455 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 98357c86477c2e61a73e06a97b5bda6b521689ebe81b9245a2adf7c79daceef7"
	I0127 13:24:15.840977 1391899 out.go:358] Setting ErrFile to fd 2...
	I0127 13:24:15.841062 1391899 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0127 13:24:15.841144 1391899 out.go:270] X Problems detected in kubelet:
	W0127 13:24:15.841186 1391899 out.go:270]   Jan 27 13:23:49 old-k8s-version-813213 kubelet[662]: E0127 13:23:49.569246     662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
	W0127 13:24:15.841217 1391899 out.go:270]   Jan 27 13:23:54 old-k8s-version-813213 kubelet[662]: E0127 13:23:54.569362     662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 13:24:15.841278 1391899 out.go:270]   Jan 27 13:24:03 old-k8s-version-813213 kubelet[662]: E0127 13:24:03.568788     662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
	W0127 13:24:15.841310 1391899 out.go:270]   Jan 27 13:24:05 old-k8s-version-813213 kubelet[662]: E0127 13:24:05.569484     662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 13:24:15.841339 1391899 out.go:270]   Jan 27 13:24:14 old-k8s-version-813213 kubelet[662]: E0127 13:24:14.569118     662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
	I0127 13:24:15.841385 1391899 out.go:358] Setting ErrFile to fd 2...
	I0127 13:24:15.841414 1391899 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 13:24:14.266605 1402708 out.go:235] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0127 13:24:14.266852 1402708 start.go:159] libmachine.API.Create for "embed-certs-434512" (driver="docker")
	I0127 13:24:14.266877 1402708 client.go:168] LocalClient.Create starting
	I0127 13:24:14.266933 1402708 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20317-1181389/.minikube/certs/ca.pem
	I0127 13:24:14.266963 1402708 main.go:141] libmachine: Decoding PEM data...
	I0127 13:24:14.266977 1402708 main.go:141] libmachine: Parsing certificate...
	I0127 13:24:14.267037 1402708 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20317-1181389/.minikube/certs/cert.pem
	I0127 13:24:14.267059 1402708 main.go:141] libmachine: Decoding PEM data...
	I0127 13:24:14.267068 1402708 main.go:141] libmachine: Parsing certificate...
	I0127 13:24:14.267420 1402708 cli_runner.go:164] Run: docker network inspect embed-certs-434512 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0127 13:24:14.287550 1402708 cli_runner.go:211] docker network inspect embed-certs-434512 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0127 13:24:14.287636 1402708 network_create.go:284] running [docker network inspect embed-certs-434512] to gather additional debugging logs...
	I0127 13:24:14.287658 1402708 cli_runner.go:164] Run: docker network inspect embed-certs-434512
	W0127 13:24:14.319135 1402708 cli_runner.go:211] docker network inspect embed-certs-434512 returned with exit code 1
	I0127 13:24:14.319169 1402708 network_create.go:287] error running [docker network inspect embed-certs-434512]: docker network inspect embed-certs-434512: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-434512 not found
	I0127 13:24:14.319182 1402708 network_create.go:289] output of [docker network inspect embed-certs-434512]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-434512 not found
	
	** /stderr **
	I0127 13:24:14.319324 1402708 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0127 13:24:14.344907 1402708 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-0f9fe3033877 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:e9:d1:42:e8} reservation:<nil>}
	I0127 13:24:14.345331 1402708 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-44e0458e836e IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:9e:9c:84:ef} reservation:<nil>}
	I0127 13:24:14.345675 1402708 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-4f5264b447e0 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:0f:c7:a8:12} reservation:<nil>}
	I0127 13:24:14.346085 1402708 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-7a4039aaf07e IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:02:42:f5:2f:95:ee} reservation:<nil>}
	I0127 13:24:14.346554 1402708 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019aec00}
	I0127 13:24:14.346575 1402708 network_create.go:124] attempt to create docker network embed-certs-434512 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0127 13:24:14.346635 1402708 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-434512 embed-certs-434512
	I0127 13:24:14.450744 1402708 network_create.go:108] docker network embed-certs-434512 192.168.85.0/24 created
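
The four "skipping subnet ... that is taken" lines above walk candidate 192.168.x.0/24 blocks in steps of 9 (49, 58, 67, 76, ...) and settle on the first free one, 192.168.85.0/24, whose .1 becomes the gateway and .2 the node's static IP. An illustrative reduction of that walk, with the taken set hard-coded where minikube derives it from the host's existing docker bridges:

    // Sketch: pick the first free 192.168.x.0/24 private subnet, stepping by 9.
    package main

    import "fmt"

    func main() {
        taken := map[string]bool{
            "192.168.49.0/24": true,
            "192.168.58.0/24": true,
            "192.168.67.0/24": true,
            "192.168.76.0/24": true,
        }
        for third := 49; third <= 247; third += 9 {
            cidr := fmt.Sprintf("192.168.%d.0/24", third)
            if taken[cidr] {
                fmt.Println("skipping subnet", cidr, "that is taken")
                continue
            }
            fmt.Println("using free private subnet", cidr)
            break
        }
    }
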
	I0127 13:24:14.450773 1402708 kic.go:121] calculated static IP "192.168.85.2" for the "embed-certs-434512" container
	I0127 13:24:14.450851 1402708 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0127 13:24:14.471422 1402708 cli_runner.go:164] Run: docker volume create embed-certs-434512 --label name.minikube.sigs.k8s.io=embed-certs-434512 --label created_by.minikube.sigs.k8s.io=true
	I0127 13:24:14.516120 1402708 oci.go:103] Successfully created a docker volume embed-certs-434512
	I0127 13:24:14.516201 1402708 cli_runner.go:164] Run: docker run --rm --name embed-certs-434512-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-434512 --entrypoint /usr/bin/test -v embed-certs-434512:/var gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -d /var/lib
	I0127 13:24:15.347955 1402708 oci.go:107] Successfully prepared a docker volume embed-certs-434512
	I0127 13:24:15.348004 1402708 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
	I0127 13:24:15.348025 1402708 kic.go:194] Starting extracting preloaded images to volume ...
	I0127 13:24:15.348097 1402708 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20317-1181389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-434512:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -I lz4 -xf /preloaded.tar -C /extractDir
	I0127 13:24:20.103137 1402708 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20317-1181389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-434512:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -I lz4 -xf /preloaded.tar -C /extractDir: (4.754994458s)
	I0127 13:24:20.103172 1402708 kic.go:203] duration metric: took 4.755142909s to extract preloaded images to volume ...
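
The pair of `docker run --rm --entrypoint /usr/bin/tar` lines above is how the preload reaches the machine: a throwaway kicbase container mounts the lz4 tarball read-only and the named volume as the extraction target, so tar runs inside the image's own userland and the volume is pre-populated before the real node container starts. The same invocation driven from Go, with the paths and volume name taken from this run:

    // Sketch: extract a preloaded images tarball into a docker volume via a
    // disposable container whose entrypoint is tar.
    package main

    import (
        "os"
        "os/exec"
    )

    func main() {
        tarball := "/home/jenkins/minikube-integration/20317-1181389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-arm64.tar.lz4"
        cmd := exec.Command("docker", "run", "--rm",
            "--entrypoint", "/usr/bin/tar",
            "-v", tarball+":/preloaded.tar:ro",
            "-v", "embed-certs-434512:/extractDir",
            "gcr.io/k8s-minikube/kicbase:v0.0.46", // the log pins this by sha256 digest
            "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        if err := cmd.Run(); err != nil {
            panic(err)
        }
    }
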
	W0127 13:24:20.103329 1402708 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0127 13:24:20.103451 1402708 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0127 13:24:20.160922 1402708 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-434512 --name embed-certs-434512 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-434512 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-434512 --network embed-certs-434512 --ip 192.168.85.2 --volume embed-certs-434512:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279
	I0127 13:24:20.527593 1402708 cli_runner.go:164] Run: docker container inspect embed-certs-434512 --format={{.State.Running}}
	I0127 13:24:20.549013 1402708 cli_runner.go:164] Run: docker container inspect embed-certs-434512 --format={{.State.Status}}
	I0127 13:24:20.570152 1402708 cli_runner.go:164] Run: docker exec embed-certs-434512 stat /var/lib/dpkg/alternatives/iptables
	I0127 13:24:20.628650 1402708 oci.go:144] the created container "embed-certs-434512" has a running status.
	I0127 13:24:20.628678 1402708 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/20317-1181389/.minikube/machines/embed-certs-434512/id_rsa...
	I0127 13:24:21.028929 1402708 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/20317-1181389/.minikube/machines/embed-certs-434512/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0127 13:24:21.053271 1402708 cli_runner.go:164] Run: docker container inspect embed-certs-434512 --format={{.State.Status}}
	I0127 13:24:21.079154 1402708 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0127 13:24:21.079184 1402708 kic_runner.go:114] Args: [docker exec --privileged embed-certs-434512 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0127 13:24:21.157729 1402708 cli_runner.go:164] Run: docker container inspect embed-certs-434512 --format={{.State.Status}}
	I0127 13:24:21.180889 1402708 machine.go:93] provisionDockerMachine start ...
	I0127 13:24:21.180995 1402708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-434512
	I0127 13:24:21.210101 1402708 main.go:141] libmachine: Using SSH client type: native
	I0127 13:24:21.210377 1402708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4132a0] 0x415ae0 <nil>  [] 0s} 127.0.0.1 34237 <nil> <nil>}
	I0127 13:24:21.210387 1402708 main.go:141] libmachine: About to run SSH command:
	hostname
	I0127 13:24:21.211088 1402708 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
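	Note: two concurrent minikube processes are interleaved in this trace, distinguished by PID: 1402708 is provisioning the unrelated "embed-certs-434512" profile, while 1391899 is the "old-k8s-version-813213" run under test (the "ssh: handshake failed: EOF" above is a transient first dial against the just-created embed-certs container, not part of this failure). To follow a single run, filter the captured log by PID, for example:
	    $ grep ' 1391899 ' logs.txt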
	I0127 13:24:25.842651 1391899 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0127 13:24:25.852586 1391899 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0127 13:24:25.855881 1391899 out.go:201] 
	W0127 13:24:25.858640 1391899 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W0127 13:24:25.858717 1391899 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	W0127 13:24:25.858737 1391899 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	W0127 13:24:25.858743 1391899 out.go:270] * 
	W0127 13:24:25.860146 1391899 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0127 13:24:25.861945 1391899 out.go:201] 
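	Note: the failed start can be replayed verbatim for local triage and the suggested log capture taken in one pass; a minimal sketch, assuming the same out/ build and dropping the harness's kvm-* flags (which should be inert under the docker driver):
	    $ out/minikube-linux-arm64 start -p old-k8s-version-813213 --memory=2200 --alsologtostderr --wait=true --driver=docker --container-runtime=containerd --kubernetes-version=v1.20.0
	    $ out/minikube-linux-arm64 -p old-k8s-version-813213 logs --file=logs.txt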
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	4764648b74cc8       523cad1a4df73       2 minutes ago       Exited              dashboard-metrics-scraper   5                   11f3e4e3069a4       dashboard-metrics-scraper-8d5bb5db8-s2b59
	ffc35dde525c7       ba04bb24b9575       5 minutes ago       Running             storage-provisioner         3                   d49101632db92       storage-provisioner
	84b4623c8ca9c       20b332c9a70d8       5 minutes ago       Running             kubernetes-dashboard        0                   4df906feafd9d       kubernetes-dashboard-cd95d586-r9xkk
	6987b703de853       db91994f4ee8f       5 minutes ago       Running             coredns                     1                   8a7f803e96064       coredns-74ff55c5b-2phj4
	26f1a91abb824       1611cd07b61d5       5 minutes ago       Running             busybox                     1                   6ed167208dd61       busybox
	eb3408e253648       ba04bb24b9575       5 minutes ago       Exited              storage-provisioner         2                   d49101632db92       storage-provisioner
	53392608d921e       25a5233254979       5 minutes ago       Running             kube-proxy                  1                   89bf7b49b48f8       kube-proxy-8gl5q
	98357c86477c2       2be0bcf609c65       5 minutes ago       Running             kindnet-cni                 1                   a4e1d9b1a0f29       kindnet-h8gtn
	498c359719101       e7605f88f17d6       5 minutes ago       Running             kube-scheduler              1                   a66ab0ccf5e56       kube-scheduler-old-k8s-version-813213
	348496f0d58f2       1df8a2b116bd1       5 minutes ago       Running             kube-controller-manager     1                   d8ae462cc2a23       kube-controller-manager-old-k8s-version-813213
	9dc682ca643e0       2c08bbbc02d3a       5 minutes ago       Running             kube-apiserver              1                   d1e02c0b1c2c6       kube-apiserver-old-k8s-version-813213
	207271fc8e8b7       05b738aa1bc63       5 minutes ago       Running             etcd                        1                   11ee657758719       etcd-old-k8s-version-813213
	b39242a2da416       1611cd07b61d5       6 minutes ago       Exited              busybox                     0                   efc1fe61b1189       busybox
	9977eebb81cee       db91994f4ee8f       7 minutes ago       Exited              coredns                     0                   4cfe860a5526d       coredns-74ff55c5b-2phj4
	8f9ba8ca38617       2be0bcf609c65       8 minutes ago       Exited              kindnet-cni                 0                   ebe6dd711af48       kindnet-h8gtn
	2cbee0b466a6c       25a5233254979       8 minutes ago       Exited              kube-proxy                  0                   8c41273626fc9       kube-proxy-8gl5q
	4b2832f8237f0       e7605f88f17d6       8 minutes ago       Exited              kube-scheduler              0                   98b5263e0b32a       kube-scheduler-old-k8s-version-813213
	fca0636811bd6       1df8a2b116bd1       8 minutes ago       Exited              kube-controller-manager     0                   6e91b2824b110       kube-controller-manager-old-k8s-version-813213
	dbf3fe12cc514       2c08bbbc02d3a       8 minutes ago       Exited              kube-apiserver              0                   9c1ff998f5514       kube-apiserver-old-k8s-version-813213
	f5b735d4310b3       05b738aa1bc63       8 minutes ago       Exited              etcd                        0                   3dd826d940bf8       etcd-old-k8s-version-813213
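	Note: this table is CRI-level state from inside the node. The attempt-1 control-plane containers are all Running after the restart and the attempt-0 set is Exited, as expected; the outlier is dashboard-metrics-scraper, Exited on attempt 5 (it dies with exit status 255 shortly after each start, per the containerd log below). The same view can be taken directly in the guest:
	    $ minikube ssh -p old-k8s-version-813213 -- sudo crictl ps -a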
	
	
	==> containerd <==
	Jan 27 13:20:44 old-k8s-version-813213 containerd[569]: time="2025-01-27T13:20:44.690258161Z" level=info msg="StartContainer for \"5a003e8153161f97e85ded01de4be26ff28056e62861500828c0e7b64d50233d\" returns successfully"
	Jan 27 13:20:44 old-k8s-version-813213 containerd[569]: time="2025-01-27T13:20:44.703375837Z" level=info msg="received exit event container_id:\"5a003e8153161f97e85ded01de4be26ff28056e62861500828c0e7b64d50233d\" id:\"5a003e8153161f97e85ded01de4be26ff28056e62861500828c0e7b64d50233d\" pid:3140 exit_status:255 exited_at:{seconds:1737984044 nanos:703136754}"
	Jan 27 13:20:44 old-k8s-version-813213 containerd[569]: time="2025-01-27T13:20:44.725554065Z" level=info msg="shim disconnected" id=5a003e8153161f97e85ded01de4be26ff28056e62861500828c0e7b64d50233d namespace=k8s.io
	Jan 27 13:20:44 old-k8s-version-813213 containerd[569]: time="2025-01-27T13:20:44.725862308Z" level=warning msg="cleaning up after shim disconnected" id=5a003e8153161f97e85ded01de4be26ff28056e62861500828c0e7b64d50233d namespace=k8s.io
	Jan 27 13:20:44 old-k8s-version-813213 containerd[569]: time="2025-01-27T13:20:44.725957025Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Jan 27 13:20:45 old-k8s-version-813213 containerd[569]: time="2025-01-27T13:20:45.167445954Z" level=info msg="RemoveContainer for \"1a73064466ad43ec312674bb19a55af972382c017099dff07f77b42f1ad2eb42\""
	Jan 27 13:20:45 old-k8s-version-813213 containerd[569]: time="2025-01-27T13:20:45.174753839Z" level=info msg="RemoveContainer for \"1a73064466ad43ec312674bb19a55af972382c017099dff07f77b42f1ad2eb42\" returns successfully"
	Jan 27 13:21:30 old-k8s-version-813213 containerd[569]: time="2025-01-27T13:21:30.570010381Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 27 13:21:30 old-k8s-version-813213 containerd[569]: time="2025-01-27T13:21:30.575813494Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" host=fake.domain
	Jan 27 13:21:30 old-k8s-version-813213 containerd[569]: time="2025-01-27T13:21:30.577791821Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Jan 27 13:21:30 old-k8s-version-813213 containerd[569]: time="2025-01-27T13:21:30.577827824Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Jan 27 13:22:14 old-k8s-version-813213 containerd[569]: time="2025-01-27T13:22:14.571272416Z" level=info msg="CreateContainer within sandbox \"11f3e4e3069a466ecdd7f4dbfcddf60d9fe8ad56cd24ed147cc6bbdaec30b31c\" for container name:\"dashboard-metrics-scraper\" attempt:5"
	Jan 27 13:22:14 old-k8s-version-813213 containerd[569]: time="2025-01-27T13:22:14.589089480Z" level=info msg="CreateContainer within sandbox \"11f3e4e3069a466ecdd7f4dbfcddf60d9fe8ad56cd24ed147cc6bbdaec30b31c\" for name:\"dashboard-metrics-scraper\" attempt:5 returns container id \"4764648b74cc899aa13e2333e353bab858d41527b0eb96fb5851c7c92bb47ff0\""
	Jan 27 13:22:14 old-k8s-version-813213 containerd[569]: time="2025-01-27T13:22:14.593093453Z" level=info msg="StartContainer for \"4764648b74cc899aa13e2333e353bab858d41527b0eb96fb5851c7c92bb47ff0\""
	Jan 27 13:22:14 old-k8s-version-813213 containerd[569]: time="2025-01-27T13:22:14.669710665Z" level=info msg="StartContainer for \"4764648b74cc899aa13e2333e353bab858d41527b0eb96fb5851c7c92bb47ff0\" returns successfully"
	Jan 27 13:22:14 old-k8s-version-813213 containerd[569]: time="2025-01-27T13:22:14.672955634Z" level=info msg="received exit event container_id:\"4764648b74cc899aa13e2333e353bab858d41527b0eb96fb5851c7c92bb47ff0\" id:\"4764648b74cc899aa13e2333e353bab858d41527b0eb96fb5851c7c92bb47ff0\" pid:3389 exit_status:255 exited_at:{seconds:1737984134 nanos:672607065}"
	Jan 27 13:22:14 old-k8s-version-813213 containerd[569]: time="2025-01-27T13:22:14.706930066Z" level=info msg="shim disconnected" id=4764648b74cc899aa13e2333e353bab858d41527b0eb96fb5851c7c92bb47ff0 namespace=k8s.io
	Jan 27 13:22:14 old-k8s-version-813213 containerd[569]: time="2025-01-27T13:22:14.706989691Z" level=warning msg="cleaning up after shim disconnected" id=4764648b74cc899aa13e2333e353bab858d41527b0eb96fb5851c7c92bb47ff0 namespace=k8s.io
	Jan 27 13:22:14 old-k8s-version-813213 containerd[569]: time="2025-01-27T13:22:14.707000652Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Jan 27 13:22:15 old-k8s-version-813213 containerd[569]: time="2025-01-27T13:22:15.398697131Z" level=info msg="RemoveContainer for \"5a003e8153161f97e85ded01de4be26ff28056e62861500828c0e7b64d50233d\""
	Jan 27 13:22:15 old-k8s-version-813213 containerd[569]: time="2025-01-27T13:22:15.416658405Z" level=info msg="RemoveContainer for \"5a003e8153161f97e85ded01de4be26ff28056e62861500828c0e7b64d50233d\" returns successfully"
	Jan 27 13:24:18 old-k8s-version-813213 containerd[569]: time="2025-01-27T13:24:18.569544828Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 27 13:24:18 old-k8s-version-813213 containerd[569]: time="2025-01-27T13:24:18.594620452Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" host=fake.domain
	Jan 27 13:24:18 old-k8s-version-813213 containerd[569]: time="2025-01-27T13:24:18.596979274Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Jan 27 13:24:18 old-k8s-version-813213 containerd[569]: time="2025-01-27T13:24:18.597011076Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
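	Note: the fake.domain pull failures above are expected, not a symptom: the harness points the metrics-server image at a deliberately unresolvable registry (hence "Using image fake.domain/registry.k8s.io/echoserver:1.4" in the run output), so the lookup against 192.168.76.1:53 can only fail. Presumably the addon was enabled along these lines (the flags are an assumption, not taken from this log):
	    $ out/minikube-linux-arm64 -p old-k8s-version-813213 addons enable metrics-server --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain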
	
	
	==> coredns [6987b703de853a384b6187fa686ac707375e62efa22618ca57d58b0628846579] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:39803 - 35571 "HINFO IN 3950191951765923051.2785443437838335599. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012433008s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	I0127 13:19:10.625702       1 trace.go:116] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2025-01-27 13:18:40.625079002 +0000 UTC m=+0.029752787) (total time: 30.000523026s):
	Trace[2019727887]: [30.000523026s] [30.000523026s] END
	E0127 13:19:10.625737       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0127 13:19:10.626646       1 trace.go:116] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2025-01-27 13:18:40.626338177 +0000 UTC m=+0.031011962) (total time: 30.000281006s):
	Trace[939984059]: [30.000281006s] [30.000281006s] END
	E0127 13:19:10.626666       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0127 13:19:10.626985       1 trace.go:116] Trace[1474941318]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2025-01-27 13:18:40.626318698 +0000 UTC m=+0.030992467) (total time: 30.000650613s):
	Trace[1474941318]: [30.000650613s] [30.000650613s] END
	E0127 13:19:10.626997       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
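	Note: the three 30s "dial tcp 10.96.0.1:443: i/o timeout" traces are a startup race, not an ongoing fault. CoreDNS issued its initial list calls at 13:18:40.625, a fraction of a second before kube-proxy finished syncing its rules (13:18:40.818, per its log below); the in-flight dials to the service VIP black-holed for the full 30s timeout, and the retries evidently succeeded, since no further timeouts are logged. The VIP can be probed the same way from inside the guest (assuming anonymous access to /version, the kubeadm default):
	    $ minikube ssh -p old-k8s-version-813213 -- curl -sk https://10.96.0.1:443/version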
	
	
	==> coredns [9977eebb81ceefd915fb8ae9adc54fd8743f8453af41100c841a3b59d1f3e72c] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:33496 - 27897 "HINFO IN 7318361875637959600.346720018903427912. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.027240103s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-813213
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-813213
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0d71ce9b1959d04f0d9fa7dbc5639a49619ad89b
	                    minikube.k8s.io/name=old-k8s-version-813213
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_01_27T13_15_47_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Jan 2025 13:15:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-813213
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Jan 2025 13:24:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Jan 2025 13:19:38 +0000   Mon, 27 Jan 2025 13:15:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Jan 2025 13:19:38 +0000   Mon, 27 Jan 2025 13:15:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Jan 2025 13:19:38 +0000   Mon, 27 Jan 2025 13:15:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Jan 2025 13:19:38 +0000   Mon, 27 Jan 2025 13:16:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-813213
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 94fc26c7473b4334bb3a2d0d8ffd8ceb
	  System UUID:                9e7bbca2-cbb2-4a8e-b921-d413bc5671fa
	  Boot ID:                    9a2b5a8b-82ce-43cf-92bd-6297263d30a0
	  Kernel Version:             5.15.0-1075-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.24
	  Kubelet Version:            v1.20.0
	  Kube-Proxy Version:         v1.20.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m40s
	  kube-system                 coredns-74ff55c5b-2phj4                           100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     8m23s
	  kube-system                 etcd-old-k8s-version-813213                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         8m32s
	  kube-system                 kindnet-h8gtn                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      8m23s
	  kube-system                 kube-apiserver-old-k8s-version-813213             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m31s
	  kube-system                 kube-controller-manager-old-k8s-version-813213    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m32s
	  kube-system                 kube-proxy-8gl5q                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m23s
	  kube-system                 kube-scheduler-old-k8s-version-813213             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m32s
	  kube-system                 metrics-server-9975d5f86-gkxmm                    100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         6m27s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m22s
	  kubernetes-dashboard        dashboard-metrics-scraper-8d5bb5db8-s2b59         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m32s
	  kubernetes-dashboard        kubernetes-dashboard-cd95d586-r9xkk               0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m32s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             420Mi (5%)  220Mi (2%)
	  ephemeral-storage  100Mi (0%)  0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  8m51s (x5 over 8m51s)  kubelet     Node old-k8s-version-813213 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m51s (x5 over 8m51s)  kubelet     Node old-k8s-version-813213 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m51s (x5 over 8m51s)  kubelet     Node old-k8s-version-813213 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m51s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 8m32s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m32s                  kubelet     Node old-k8s-version-813213 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m32s                  kubelet     Node old-k8s-version-813213 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m32s                  kubelet     Node old-k8s-version-813213 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m32s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                8m23s                  kubelet     Node old-k8s-version-813213 status is now: NodeReady
	  Normal  Starting                 8m21s                  kube-proxy  Starting kube-proxy.
	  Normal  Starting                 5m58s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m58s (x8 over 5m58s)  kubelet     Node old-k8s-version-813213 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m58s (x8 over 5m58s)  kubelet     Node old-k8s-version-813213 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m58s (x7 over 5m58s)  kubelet     Node old-k8s-version-813213 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m58s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 5m47s                  kube-proxy  Starting kube-proxy.
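	Note: nothing in this node view explains the wait timeout: the node is Ready with a current heartbeat (RenewTime 13:24:20), kubelet and kube-proxy both report v1.20.0, and allocated requests (cpu 950m, memory 420Mi) are well inside capacity. That points at minikube's control-plane version wait (the known flake tracked in issue #11417 linked above) rather than at node health; a quick cross-check, assuming the kubeconfig context created for the profile:
	    $ kubectl --context old-k8s-version-813213 get nodes -o wide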
	
	
	==> dmesg <==
	
	
	==> etcd [207271fc8e8b70d52e363f5f48188c01f5185e49f7efced711705901058b9f7f] <==
	2025-01-27 13:20:24.389003 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-27 13:20:34.389073 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-27 13:20:44.389148 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-27 13:20:54.388943 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-27 13:21:04.388977 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-27 13:21:14.389044 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-27 13:21:24.389017 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-27 13:21:34.389054 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-27 13:21:44.388976 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-27 13:21:54.388883 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-27 13:22:04.388929 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-27 13:22:14.389087 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-27 13:22:24.388996 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-27 13:22:34.388967 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-27 13:22:44.389238 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-27 13:22:54.389019 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-27 13:23:04.388888 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-27 13:23:14.388984 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-27 13:23:24.388935 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-27 13:23:34.388854 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-27 13:23:44.389355 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-27 13:23:54.388936 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-27 13:24:04.388918 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-27 13:24:14.389261 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-27 13:24:24.389888 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> etcd [f5b735d4310b352e1f2a8c5e9bc269f963d80c5a3e1cc6a8016c1873e8e598a5] <==
	raft2025/01/27 13:15:37 INFO: ea7e25599daad906 became leader at term 2
	raft2025/01/27 13:15:37 INFO: raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2
	2025-01-27 13:15:37.341247 I | etcdserver: setting up the initial cluster version to 3.4
	2025-01-27 13:15:37.341501 I | etcdserver: published {Name:old-k8s-version-813213 ClientURLs:[https://192.168.76.2:2379]} to cluster 6f20f2c4b2fb5f8a
	2025-01-27 13:15:37.341587 I | embed: ready to serve client requests
	2025-01-27 13:15:37.343280 I | embed: ready to serve client requests
	2025-01-27 13:15:37.344653 I | embed: serving client requests on 192.168.76.2:2379
	2025-01-27 13:15:37.345000 N | etcdserver/membership: set the initial cluster version to 3.4
	2025-01-27 13:15:37.345668 I | embed: serving client requests on 127.0.0.1:2379
	2025-01-27 13:15:37.351849 I | etcdserver/api: enabled capabilities for version 3.4
	2025-01-27 13:15:46.035402 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-27 13:15:58.876011 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-27 13:16:05.679445 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-27 13:16:15.661312 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-27 13:16:25.659895 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-27 13:16:35.659938 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-27 13:16:45.659961 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-27 13:16:55.659919 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-27 13:17:05.660015 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-27 13:17:15.659924 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-27 13:17:25.659923 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-27 13:17:35.661498 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-27 13:17:45.659964 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-27 13:17:55.659864 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-27 13:18:00.744644 W | etcdserver: read-only range request "key:\"/registry/replicasets/kube-system/metrics-server-9975d5f86\" " with result "range_response_count:1 size:3177" took too long (112.606733ms) to execute
	
	
	==> kernel <==
	 13:24:28 up  6:06,  0 users,  load average: 1.84, 1.90, 2.41
	Linux old-k8s-version-813213 5.15.0-1075-aws #82~20.04.1-Ubuntu SMP Thu Dec 19 05:23:06 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [8f9ba8ca38617dd4c57c48f007e00144c96edc8377410ac33996e89c6c626ae7] <==
	I0127 13:16:09.821116       1 shared_informer.go:320] Caches are synced for kube-network-policies
	I0127 13:16:09.821159       1 metrics.go:61] Registering metrics
	I0127 13:16:09.821218       1 controller.go:401] Syncing nftables rules
	I0127 13:16:19.636925       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0127 13:16:19.637021       1 main.go:301] handling current node
	I0127 13:16:29.637231       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0127 13:16:29.637274       1 main.go:301] handling current node
	I0127 13:16:39.634155       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0127 13:16:39.634202       1 main.go:301] handling current node
	I0127 13:16:49.642776       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0127 13:16:49.642823       1 main.go:301] handling current node
	I0127 13:16:59.633942       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0127 13:16:59.634133       1 main.go:301] handling current node
	I0127 13:17:09.633931       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0127 13:17:09.633971       1 main.go:301] handling current node
	I0127 13:17:19.636517       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0127 13:17:19.636646       1 main.go:301] handling current node
	I0127 13:17:29.634970       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0127 13:17:29.635007       1 main.go:301] handling current node
	I0127 13:17:39.641107       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0127 13:17:39.641141       1 main.go:301] handling current node
	I0127 13:17:49.633966       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0127 13:17:49.634044       1 main.go:301] handling current node
	I0127 13:17:59.637180       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0127 13:17:59.637381       1 main.go:301] handling current node
	
	
	==> kindnet [98357c86477c2e61a73e06a97b5bda6b521689ebe81b9245a2adf7c79daceef7] <==
	I0127 13:22:20.740023       1 main.go:301] handling current node
	I0127 13:22:30.747933       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0127 13:22:30.747967       1 main.go:301] handling current node
	I0127 13:22:40.739941       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0127 13:22:40.739976       1 main.go:301] handling current node
	I0127 13:22:50.740553       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0127 13:22:50.740588       1 main.go:301] handling current node
	I0127 13:23:00.747234       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0127 13:23:00.747270       1 main.go:301] handling current node
	I0127 13:23:10.748055       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0127 13:23:10.748091       1 main.go:301] handling current node
	I0127 13:23:20.745633       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0127 13:23:20.745669       1 main.go:301] handling current node
	I0127 13:23:30.747015       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0127 13:23:30.747052       1 main.go:301] handling current node
	I0127 13:23:40.739646       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0127 13:23:40.739681       1 main.go:301] handling current node
	I0127 13:23:50.749078       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0127 13:23:50.749113       1 main.go:301] handling current node
	I0127 13:24:00.740275       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0127 13:24:00.740313       1 main.go:301] handling current node
	I0127 13:24:10.745857       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0127 13:24:10.745894       1 main.go:301] handling current node
	I0127 13:24:20.745998       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0127 13:24:20.746108       1 main.go:301] handling current node
	
	
	==> kube-apiserver [9dc682ca643e0a8340eec4be4ed13358824df730d5eace4f73db56ba726b7da7] <==
	I0127 13:21:02.802170       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0127 13:21:02.802180       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0127 13:21:36.128873       1 client.go:360] parsed scheme: "passthrough"
	I0127 13:21:36.128916       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0127 13:21:36.128926       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0127 13:21:41.200797       1 handler_proxy.go:102] no RequestInfo found in the context
	E0127 13:21:41.201003       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0127 13:21:41.201021       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0127 13:22:15.903182       1 client.go:360] parsed scheme: "passthrough"
	I0127 13:22:15.903245       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0127 13:22:15.903255       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0127 13:22:46.363168       1 client.go:360] parsed scheme: "passthrough"
	I0127 13:22:46.363213       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0127 13:22:46.363222       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0127 13:23:16.586448       1 client.go:360] parsed scheme: "passthrough"
	I0127 13:23:16.586494       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0127 13:23:16.586504       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0127 13:23:38.719689       1 handler_proxy.go:102] no RequestInfo found in the context
	E0127 13:23:38.719892       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0127 13:23:38.719909       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0127 13:24:01.060064       1 client.go:360] parsed scheme: "passthrough"
	I0127 13:24:01.060111       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0127 13:24:01.060270       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	
	==> kube-apiserver [dbf3fe12cc514cdb259c989c31b00ebdd7cf91640d7cf28f457a8a8ee0d4a0ba] <==
	I0127 13:15:44.592367       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0127 13:15:44.592399       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0127 13:15:44.614684       1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
	I0127 13:15:44.619373       1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
	I0127 13:15:44.619400       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0127 13:15:45.176152       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0127 13:15:45.258905       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0127 13:15:45.337694       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I0127 13:15:45.339180       1 controller.go:606] quota admission added evaluator for: endpoints
	I0127 13:15:45.343954       1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0127 13:15:46.372083       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	I0127 13:15:47.241016       1 controller.go:606] quota admission added evaluator for: deployments.apps
	I0127 13:15:47.304958       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	I0127 13:15:55.633487       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
	I0127 13:16:04.311708       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
	I0127 13:16:04.363424       1 controller.go:606] quota admission added evaluator for: replicasets.apps
	I0127 13:16:09.811897       1 client.go:360] parsed scheme: "passthrough"
	I0127 13:16:09.811935       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0127 13:16:09.811943       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0127 13:16:44.923182       1 client.go:360] parsed scheme: "passthrough"
	I0127 13:16:44.923227       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0127 13:16:44.923236       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0127 13:17:20.419921       1 client.go:360] parsed scheme: "passthrough"
	I0127 13:17:20.419968       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0127 13:17:20.419978       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	
	==> kube-controller-manager [348496f0d58f24df8178fd9da80e0b658cd42c30cb16a5c7f5b4fed5e96bc2c6] <==
	W0127 13:20:02.491698       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0127 13:20:26.392427       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0127 13:20:34.142248       1 request.go:655] Throttling request took 1.048311763s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W0127 13:20:34.993998       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0127 13:20:56.894862       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0127 13:21:06.644543       1 request.go:655] Throttling request took 1.048449654s, request: GET:https://192.168.76.2:8443/apis/autoscaling/v1?timeout=32s
	W0127 13:21:07.496400       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0127 13:21:27.396901       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0127 13:21:39.146990       1 request.go:655] Throttling request took 1.048432953s, request: GET:https://192.168.76.2:8443/apis/admissionregistration.k8s.io/v1beta1?timeout=32s
	W0127 13:21:39.998658       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0127 13:21:57.898738       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0127 13:22:11.648166       1 request.go:655] Throttling request took 1.048433278s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W0127 13:22:12.499629       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0127 13:22:28.400714       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0127 13:22:44.150018       1 request.go:655] Throttling request took 1.04821663s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W0127 13:22:45.011233       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0127 13:22:58.902511       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0127 13:23:16.663381       1 request.go:655] Throttling request took 1.048404408s, request: GET:https://192.168.76.2:8443/apis/authorization.k8s.io/v1?timeout=32s
	W0127 13:23:17.514848       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0127 13:23:29.404430       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0127 13:23:49.165292       1 request.go:655] Throttling request took 1.048519089s, request: GET:https://192.168.76.2:8443/apis/policy/v1beta1?timeout=32s
	W0127 13:23:50.017889       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0127 13:23:59.906325       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0127 13:24:21.668413       1 request.go:655] Throttling request took 1.04800496s, request: GET:https://192.168.76.2:8443/apis/apiextensions.k8s.io/v1?timeout=32s
	W0127 13:24:22.519901       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
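	Note: this repeating triplet (Throttling request took ~1.05s, resource_quota_controller "unable to retrieve the complete list of server APIs", garbagecollector "failed to discover some groups") is all downstream of the intentionally broken metrics-server: its v1beta1.metrics.k8s.io APIService never becomes available (the apiserver log above shows it returning 503), every discovery pass trips over it, and the extra discovery traffic triggers client-side throttling. The aggregated API's status can be confirmed with:
	    $ kubectl --context old-k8s-version-813213 get apiservice v1beta1.metrics.k8s.io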
	
	
	==> kube-controller-manager [fca0636811bd616b25b61e58c05bade814ad56fe3e02bfa2b5cfa584cbec2b13] <==
	I0127 13:16:04.338902       1 shared_informer.go:247] Caches are synced for taint 
	I0127 13:16:04.339077       1 node_lifecycle_controller.go:1429] Initializing eviction metric for zone: 
	W0127 13:16:04.339208       1 node_lifecycle_controller.go:1044] Missing timestamp for Node old-k8s-version-813213. Assuming now as a timestamp.
	I0127 13:16:04.339260       1 node_lifecycle_controller.go:1245] Controller detected that zone  is now in state Normal.
	I0127 13:16:04.339485       1 taint_manager.go:187] Starting NoExecuteTaintManager
	I0127 13:16:04.339777       1 event.go:291] "Event occurred" object="old-k8s-version-813213" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node old-k8s-version-813213 event: Registered Node old-k8s-version-813213 in Controller"
	I0127 13:16:04.342860       1 shared_informer.go:247] Caches are synced for resource quota 
	I0127 13:16:04.403120       1 shared_informer.go:247] Caches are synced for disruption 
	I0127 13:16:04.403233       1 disruption.go:339] Sending events to api server.
	I0127 13:16:04.404360       1 shared_informer.go:247] Caches are synced for ReplicaSet 
	I0127 13:16:04.407769       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-8gl5q"
	I0127 13:16:04.418603       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-h8gtn"
	I0127 13:16:04.417890       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-74ff55c5b to 2"
	E0127 13:16:04.454778       1 clusterroleaggregation_controller.go:181] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
	I0127 13:16:04.455376       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-dn9t8"
	I0127 13:16:04.609682       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I0127 13:16:04.610591       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-2phj4"
	E0127 13:16:04.611105       1 daemon_controller.go:320] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"77adf47d-1e96-4003-80ff-c72f44ebaf58", ResourceVersion:"275", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63873580547, loc:(*time.Location)(0x632eb80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x4000f2ab80), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x4000f2ad00)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.
LabelSelector)(0x4000f2ad40), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.Gl
usterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0x40015da880), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4000f2a
d60), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeS
ource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4000f2ad80), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil),
AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.20.0", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4000f2ade0)}}, Resources:v1.R
esourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x4000f500c0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPo
licy:"Always", TerminationGracePeriodSeconds:(*int64)(0x40010b86b8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x4000876b60), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), Runtime
ClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x400000eb30)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x40010b8738)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
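	Note: the daemon_controller error above, despite its size, is a single benign record: storing status for the kube-proxy DaemonSet hit an optimistic-concurrency conflict during initial sync ("the object has been modified; please apply your changes to the latest version and try again") and was retried, the same way the "edit" clusterrole aggregation conflict at 13:16:04.454 was.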
	I0127 13:16:04.787893       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0127 13:16:04.787922       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0127 13:16:04.815799       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0127 13:16:05.690504       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
	I0127 13:16:05.701644       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-dn9t8"
	I0127 13:17:59.336021       1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-9975d5f86 to 1"
	I0127 13:18:00.535364       1 event.go:291] "Event occurred" object="kube-system/metrics-server-9975d5f86" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-9975d5f86-gkxmm"
	
	
	==> kube-proxy [2cbee0b466a6c9c956ae17e795d58e2af64abe7cc2a2822686196036bfcbe2e6] <==
	I0127 13:16:06.800938       1 node.go:172] Successfully retrieved node IP: 192.168.76.2
	I0127 13:16:06.801058       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
	W0127 13:16:06.822615       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0127 13:16:06.822906       1 server_others.go:185] Using iptables Proxier.
	I0127 13:16:06.823265       1 server.go:650] Version: v1.20.0
	I0127 13:16:06.824118       1 config.go:315] Starting service config controller
	I0127 13:16:06.824262       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0127 13:16:06.824410       1 config.go:224] Starting endpoint slice config controller
	I0127 13:16:06.824485       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0127 13:16:06.925236       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0127 13:16:06.925321       1 shared_informer.go:247] Caches are synced for service config 
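
	Both kube-proxy instances in this report log the warning 'Unknown proxy mode "", assuming iptables proxy': with no --proxy-mode flag set, kube-proxy falls back to the iptables proxier. A simplified sketch of that fallback (condensed from the v1.20 behaviour, not the verbatim upstream code):

	package example

	func chooseProxyMode(flag string) string {
		switch flag {
		case "iptables", "ipvs", "userspace":
			return flag
		default:
			// An empty or unrecognized mode logs a warning and
			// falls back to iptables, as seen in the logs here.
			return "iptables"
		}
	}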
	
	
	==> kube-proxy [53392608d921e89b00a7a95975a674e420d59a11fbdfeb24bfb54ebe456ab676] <==
	I0127 13:18:40.686811       1 node.go:172] Successfully retrieved node IP: 192.168.76.2
	I0127 13:18:40.686887       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
	W0127 13:18:40.711820       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0127 13:18:40.711925       1 server_others.go:185] Using iptables Proxier.
	I0127 13:18:40.712261       1 server.go:650] Version: v1.20.0
	I0127 13:18:40.714008       1 config.go:315] Starting service config controller
	I0127 13:18:40.718621       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0127 13:18:40.714803       1 config.go:224] Starting endpoint slice config controller
	I0127 13:18:40.718681       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0127 13:18:40.818822       1 shared_informer.go:247] Caches are synced for service config 
	I0127 13:18:40.818822       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	
	==> kube-scheduler [498c359719101282d021d8fef3d5d3ebaf865f947cfaafe3e0ea022abbc87f53] <==
	I0127 13:18:33.346910       1 serving.go:331] Generated self-signed cert in-memory
	I0127 13:18:38.613577       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0127 13:18:38.613613       1 shared_informer.go:240] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0127 13:18:38.613663       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0127 13:18:38.613668       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0127 13:18:38.613689       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0127 13:18:38.613693       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0127 13:18:38.616306       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0127 13:18:38.616408       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I0127 13:18:38.724910       1 shared_informer.go:247] Caches are synced for RequestHeaderAuthRequestController 
	I0127 13:18:38.725444       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	I0127 13:18:38.731430       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 
	
	
	==> kube-scheduler [4b2832f8237f04564577430293388c25c8df7894084ea4413f6ef816262c7bbc] <==
	W0127 13:15:43.739321       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0127 13:15:43.741092       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0127 13:15:43.870584       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0127 13:15:43.870759       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I0127 13:15:43.874977       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0127 13:15:43.875005       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0127 13:15:43.897553       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0127 13:15:43.897876       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0127 13:15:43.897967       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0127 13:15:43.898848       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0127 13:15:43.898962       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0127 13:15:43.898979       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0127 13:15:43.899051       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0127 13:15:43.899215       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0127 13:15:43.899282       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0127 13:15:43.899327       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0127 13:15:43.899365       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0127 13:15:43.921558       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0127 13:15:44.712198       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0127 13:15:44.824503       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0127 13:15:44.852101       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0127 13:15:44.857966       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0127 13:15:44.915577       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0127 13:15:45.156918       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0127 13:15:47.275146       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
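
	The burst of "is forbidden" errors above is a startup race: the scheduler's informers begin listing before the bootstrap RBAC bindings for system:kube-scheduler have propagated, and the errors stop once they do (the caches sync at 13:15:47). A minimal sketch of probing such a permission with a SelfSubjectAccessReview (clientset is a placeholder):

	package example

	import (
		"context"

		authv1 "k8s.io/api/authorization/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	func canListPods(clientset kubernetes.Interface) (bool, error) {
		review := &authv1.SelfSubjectAccessReview{
			Spec: authv1.SelfSubjectAccessReviewSpec{
				ResourceAttributes: &authv1.ResourceAttributes{
					Verb:     "list",
					Resource: "pods",
				},
			},
		}
		resp, err := clientset.AuthorizationV1().SelfSubjectAccessReviews().
			Create(context.TODO(), review, metav1.CreateOptions{})
		if err != nil {
			return false, err
		}
		return resp.Status.Allowed, nil
	}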
	
	
	==> kubelet <==
	Jan 27 13:22:56 old-k8s-version-813213 kubelet[662]: E0127 13:22:56.568752     662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
	Jan 27 13:23:03 old-k8s-version-813213 kubelet[662]: E0127 13:23:03.569135     662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 27 13:23:11 old-k8s-version-813213 kubelet[662]: I0127 13:23:11.569420     662 scope.go:95] [topologymanager] RemoveContainer - Container ID: 4764648b74cc899aa13e2333e353bab858d41527b0eb96fb5851c7c92bb47ff0
	Jan 27 13:23:11 old-k8s-version-813213 kubelet[662]: E0127 13:23:11.569770     662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
	Jan 27 13:23:16 old-k8s-version-813213 kubelet[662]: E0127 13:23:16.569194     662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 27 13:23:26 old-k8s-version-813213 kubelet[662]: I0127 13:23:26.568393     662 scope.go:95] [topologymanager] RemoveContainer - Container ID: 4764648b74cc899aa13e2333e353bab858d41527b0eb96fb5851c7c92bb47ff0
	Jan 27 13:23:26 old-k8s-version-813213 kubelet[662]: E0127 13:23:26.568839     662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
	Jan 27 13:23:28 old-k8s-version-813213 kubelet[662]: E0127 13:23:28.569744     662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 27 13:23:37 old-k8s-version-813213 kubelet[662]: I0127 13:23:37.568473     662 scope.go:95] [topologymanager] RemoveContainer - Container ID: 4764648b74cc899aa13e2333e353bab858d41527b0eb96fb5851c7c92bb47ff0
	Jan 27 13:23:37 old-k8s-version-813213 kubelet[662]: E0127 13:23:37.568849     662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
	Jan 27 13:23:41 old-k8s-version-813213 kubelet[662]: E0127 13:23:41.573539     662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 27 13:23:49 old-k8s-version-813213 kubelet[662]: I0127 13:23:49.568827     662 scope.go:95] [topologymanager] RemoveContainer - Container ID: 4764648b74cc899aa13e2333e353bab858d41527b0eb96fb5851c7c92bb47ff0
	Jan 27 13:23:49 old-k8s-version-813213 kubelet[662]: E0127 13:23:49.569246     662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
	Jan 27 13:23:54 old-k8s-version-813213 kubelet[662]: E0127 13:23:54.569362     662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 27 13:24:03 old-k8s-version-813213 kubelet[662]: I0127 13:24:03.568465     662 scope.go:95] [topologymanager] RemoveContainer - Container ID: 4764648b74cc899aa13e2333e353bab858d41527b0eb96fb5851c7c92bb47ff0
	Jan 27 13:24:03 old-k8s-version-813213 kubelet[662]: E0127 13:24:03.568788     662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
	Jan 27 13:24:05 old-k8s-version-813213 kubelet[662]: E0127 13:24:05.569484     662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 27 13:24:14 old-k8s-version-813213 kubelet[662]: I0127 13:24:14.568558     662 scope.go:95] [topologymanager] RemoveContainer - Container ID: 4764648b74cc899aa13e2333e353bab858d41527b0eb96fb5851c7c92bb47ff0
	Jan 27 13:24:14 old-k8s-version-813213 kubelet[662]: E0127 13:24:14.569118     662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
	Jan 27 13:24:18 old-k8s-version-813213 kubelet[662]: E0127 13:24:18.597378     662 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
	Jan 27 13:24:18 old-k8s-version-813213 kubelet[662]: E0127 13:24:18.597431     662 kuberuntime_image.go:51] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
	Jan 27 13:24:18 old-k8s-version-813213 kubelet[662]: E0127 13:24:18.598868     662 kuberuntime_manager.go:829] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:metrics-server-token-8rbgc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec
:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-9975d5f86-gkxmm_kube-system(00aea83
b-5c4a-48d5-b920-1fe2854717a0): ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
	Jan 27 13:24:18 old-k8s-version-813213 kubelet[662]: E0127 13:24:18.598916     662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Jan 27 13:24:28 old-k8s-version-813213 kubelet[662]: I0127 13:24:28.568435     662 scope.go:95] [topologymanager] RemoveContainer - Container ID: 4764648b74cc899aa13e2333e353bab858d41527b0eb96fb5851c7c92bb47ff0
	Jan 27 13:24:28 old-k8s-version-813213 kubelet[662]: E0127 13:24:28.568770     662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
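
	Two expected failures interleave in the kubelet log above: metrics-server cannot pull its image because the test pins it to the unresolvable host fake.domain, and dashboard-metrics-scraper sits in CrashLoopBackOff. The quoted "back-off 2m40s" is kubelet's container restart back-off, which starts at 10s, doubles per consecutive failure, and is capped at 5m, so 2m40s (160s) is the fifth step. A sketch of that schedule (the 10s/5m constants match kubelet defaults; the helper itself is illustrative):

	package example

	import "time"

	func restartBackoffSteps(n int) []time.Duration {
		const (
			initial  = 10 * time.Second
			maxDelay = 5 * time.Minute
		)
		steps := make([]time.Duration, 0, n)
		for d := initial; len(steps) < n; d *= 2 {
			if d > maxDelay {
				d = maxDelay
			}
			steps = append(steps, d)
		}
		return steps // n=6: 10s 20s 40s 1m20s 2m40s 5m
	}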
	
	
	==> kubernetes-dashboard [84b4623c8ca9caacba41fa6535bc6f3dc8e0c52347d8ef68c993a81a7da957d1] <==
	2025/01/27 13:19:03 Using namespace: kubernetes-dashboard
	2025/01/27 13:19:03 Using in-cluster config to connect to apiserver
	2025/01/27 13:19:03 Using secret token for csrf signing
	2025/01/27 13:19:03 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/01/27 13:19:03 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/01/27 13:19:04 Successful initial request to the apiserver, version: v1.20.0
	2025/01/27 13:19:04 Generating JWE encryption key
	2025/01/27 13:19:04 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/01/27 13:19:04 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/01/27 13:19:04 Initializing JWE encryption key from synchronized object
	2025/01/27 13:19:04 Creating in-cluster Sidecar client
	2025/01/27 13:19:04 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 13:19:04 Serving insecurely on HTTP port: 9090
	2025/01/27 13:19:34 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 13:20:04 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 13:20:34 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 13:21:04 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 13:21:34 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 13:22:04 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 13:22:34 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 13:23:04 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 13:23:34 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 13:24:04 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 13:19:03 Starting overwatch
	
	
	==> storage-provisioner [eb3408e2536487de3f2f4d60740cac93d750307a3768323cc24df540e774eb8c] <==
	I0127 13:18:40.611598       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0127 13:19:10.623925       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [ffc35dde525c7cc22c06eeef64a37735aeec2d5e82f29e5ae8ca6de1093abde9] <==
	I0127 13:19:23.704351       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0127 13:19:23.736954       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0127 13:19:23.737187       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0127 13:19:41.202470       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0127 13:19:41.202677       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"727bf6f7-918a-47a1-abfe-871409cd83da", APIVersion:"v1", ResourceVersion:"857", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-813213_b97c09ba-f010-48db-a362-6cfb89fbb038 became leader
	I0127 13:19:41.202993       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-813213_b97c09ba-f010-48db-a362-6cfb89fbb038!
	I0127 13:19:41.303107       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-813213_b97c09ba-f010-48db-a362-6cfb89fbb038!
	

-- /stdout --
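
The storage-provisioner log at the end of the dump shows the standard leader-election handshake ("attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath" followed by "successfully acquired lease"). The provisioner there uses an older Endpoints-based lock (note the Endpoints event); a minimal sketch of the same handshake with client-go's current Lease lock, with identity and callbacks as placeholders:

	package example

	import (
		"context"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)

	func runWithLease(ctx context.Context, clientset kubernetes.Interface, id string) {
		lock := &resourcelock.LeaseLock{
			LeaseMeta: metav1.ObjectMeta{
				Name:      "k8s.io-minikube-hostpath",
				Namespace: "kube-system",
			},
			Client:     clientset.CoordinationV1(),
			LockConfig: resourcelock.ResourceLockConfig{Identity: id},
		}
		leaderelection.RunOrDie(ctx, leaderelection.LeaderElectionConfig{
			Lock:          lock,
			LeaseDuration: 15 * time.Second, // how long an acquired lease stays valid
			RenewDeadline: 10 * time.Second, // leader must renew within this window
			RetryPeriod:   2 * time.Second,  // candidates retry at this interval
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) {
					// start the provisioner controller here
				},
				OnStoppedLeading: func() {
					// lease lost; stop work
				},
			},
		})
	}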
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-813213 -n old-k8s-version-813213
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-813213 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-9975d5f86-gkxmm
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-813213 describe pod metrics-server-9975d5f86-gkxmm
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-813213 describe pod metrics-server-9975d5f86-gkxmm: exit status 1 (137.10156ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-9975d5f86-gkxmm" not found

** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-813213 describe pod metrics-server-9975d5f86-gkxmm: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (376.37s)
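
The post-mortem above finds the non-running pod with kubectl's --field-selector=status.phase!=Running. The same query expressed with client-go (clientset is a placeholder):

	package example

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	func printNonRunningPods(clientset kubernetes.Interface) error {
		pods, err := clientset.CoreV1().Pods(metav1.NamespaceAll).List(
			context.TODO(),
			metav1.ListOptions{FieldSelector: "status.phase!=Running"},
		)
		if err != nil {
			return err
		}
		for _, p := range pods.Items {
			fmt.Printf("%s/%s is %s\n", p.Namespace, p.Name, p.Status.Phase)
		}
		return nil
	}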


Test pass (300/330)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 5.92
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.09
9 TestDownloadOnly/v1.20.0/DeleteAll 0.21
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.32.1/json-events 5.95
13 TestDownloadOnly/v1.32.1/preload-exists 0
17 TestDownloadOnly/v1.32.1/LogsDuration 0.09
18 TestDownloadOnly/v1.32.1/DeleteAll 0.22
19 TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds 0.14
21 TestBinaryMirror 0.59
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.08
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 218.96
29 TestAddons/serial/Volcano 40.98
31 TestAddons/serial/GCPAuth/Namespaces 0.18
32 TestAddons/serial/GCPAuth/FakeCredentials 9.86
35 TestAddons/parallel/Registry 16.73
36 TestAddons/parallel/Ingress 19.84
37 TestAddons/parallel/InspektorGadget 11.84
38 TestAddons/parallel/MetricsServer 6.91
40 TestAddons/parallel/CSI 41.78
41 TestAddons/parallel/Headlamp 16.96
42 TestAddons/parallel/CloudSpanner 6.59
43 TestAddons/parallel/LocalPath 55.07
44 TestAddons/parallel/NvidiaDevicePlugin 5.98
45 TestAddons/parallel/Yakd 11.86
47 TestAddons/StoppedEnableDisable 12.2
48 TestCertOptions 35.01
49 TestCertExpiration 231.81
51 TestForceSystemdFlag 46.73
52 TestForceSystemdEnv 46.3
53 TestDockerEnvContainerd 44.12
58 TestErrorSpam/setup 29.08
59 TestErrorSpam/start 0.83
60 TestErrorSpam/status 1.13
61 TestErrorSpam/pause 1.85
62 TestErrorSpam/unpause 1.91
63 TestErrorSpam/stop 2.06
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 49.12
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 6.18
70 TestFunctional/serial/KubeContext 0.06
71 TestFunctional/serial/KubectlGetPods 0.1
74 TestFunctional/serial/CacheCmd/cache/add_remote 4.16
75 TestFunctional/serial/CacheCmd/cache/add_local 1.39
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
77 TestFunctional/serial/CacheCmd/cache/list 0.07
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.32
79 TestFunctional/serial/CacheCmd/cache/cache_reload 2.04
80 TestFunctional/serial/CacheCmd/cache/delete 0.12
81 TestFunctional/serial/MinikubeKubectlCmd 0.15
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.15
83 TestFunctional/serial/ExtraConfig 41.89
84 TestFunctional/serial/ComponentHealth 0.1
85 TestFunctional/serial/LogsCmd 1.68
86 TestFunctional/serial/LogsFileCmd 1.76
87 TestFunctional/serial/InvalidService 4.35
89 TestFunctional/parallel/ConfigCmd 0.5
90 TestFunctional/parallel/DashboardCmd 16.15
91 TestFunctional/parallel/DryRun 0.44
92 TestFunctional/parallel/InternationalLanguage 0.26
93 TestFunctional/parallel/StatusCmd 1.12
97 TestFunctional/parallel/ServiceCmdConnect 12.65
98 TestFunctional/parallel/AddonsCmd 0.2
99 TestFunctional/parallel/PersistentVolumeClaim 26.12
101 TestFunctional/parallel/SSHCmd 0.69
102 TestFunctional/parallel/CpCmd 2.25
104 TestFunctional/parallel/FileSync 0.27
105 TestFunctional/parallel/CertSync 1.81
109 TestFunctional/parallel/NodeLabels 0.08
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.56
113 TestFunctional/parallel/License 0.25
115 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.68
116 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
118 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.44
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.11
120 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
124 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
125 TestFunctional/parallel/ServiceCmd/DeployApp 7.22
126 TestFunctional/parallel/ProfileCmd/profile_not_create 0.43
127 TestFunctional/parallel/ProfileCmd/profile_list 0.42
128 TestFunctional/parallel/ProfileCmd/profile_json_output 0.43
129 TestFunctional/parallel/MountCmd/any-port 8.17
130 TestFunctional/parallel/ServiceCmd/List 0.62
131 TestFunctional/parallel/ServiceCmd/JSONOutput 0.63
132 TestFunctional/parallel/ServiceCmd/HTTPS 0.41
133 TestFunctional/parallel/ServiceCmd/Format 0.38
134 TestFunctional/parallel/ServiceCmd/URL 0.39
135 TestFunctional/parallel/MountCmd/specific-port 1.72
136 TestFunctional/parallel/MountCmd/VerifyCleanup 1.21
137 TestFunctional/parallel/Version/short 0.1
138 TestFunctional/parallel/Version/components 1.4
139 TestFunctional/parallel/ImageCommands/ImageListShort 0.26
140 TestFunctional/parallel/ImageCommands/ImageListTable 0.25
141 TestFunctional/parallel/ImageCommands/ImageListJson 0.27
142 TestFunctional/parallel/ImageCommands/ImageListYaml 0.23
143 TestFunctional/parallel/ImageCommands/ImageBuild 4.12
144 TestFunctional/parallel/ImageCommands/Setup 1.29
145 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.28
146 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.42
147 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.58
148 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.42
149 TestFunctional/parallel/ImageCommands/ImageRemove 0.57
150 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.88
151 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.46
152 TestFunctional/parallel/UpdateContextCmd/no_changes 0.15
153 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.17
154 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.17
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
161 TestMultiControlPlane/serial/StartCluster 108.09
162 TestMultiControlPlane/serial/DeployApp 32.88
163 TestMultiControlPlane/serial/PingHostFromPods 1.66
164 TestMultiControlPlane/serial/AddWorkerNode 22.01
165 TestMultiControlPlane/serial/NodeLabels 0.14
166 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.02
167 TestMultiControlPlane/serial/CopyFile 19.46
168 TestMultiControlPlane/serial/StopSecondaryNode 12.86
169 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.78
170 TestMultiControlPlane/serial/RestartSecondaryNode 19.32
171 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1
172 TestMultiControlPlane/serial/RestartClusterKeepsNodes 132.04
173 TestMultiControlPlane/serial/DeleteSecondaryNode 10.49
174 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.77
175 TestMultiControlPlane/serial/StopCluster 35.91
176 TestMultiControlPlane/serial/RestartCluster 71.31
177 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.79
178 TestMultiControlPlane/serial/AddSecondaryNode 44.31
179 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.34
183 TestJSONOutput/start/Command 78.02
184 TestJSONOutput/start/Audit 0
186 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
189 TestJSONOutput/pause/Command 0.74
190 TestJSONOutput/pause/Audit 0
192 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/unpause/Command 0.66
196 TestJSONOutput/unpause/Audit 0
198 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/stop/Command 5.73
202 TestJSONOutput/stop/Audit 0
204 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
206 TestErrorJSONOutput 0.24
208 TestKicCustomNetwork/create_custom_network 40.29
209 TestKicCustomNetwork/use_default_bridge_network 33.43
210 TestKicExistingNetwork 33.32
211 TestKicCustomSubnet 31.79
212 TestKicStaticIP 33.26
213 TestMainNoArgs 0.05
214 TestMinikubeProfile 69.28
217 TestMountStart/serial/StartWithMountFirst 6.39
218 TestMountStart/serial/VerifyMountFirst 0.27
219 TestMountStart/serial/StartWithMountSecond 8.47
220 TestMountStart/serial/VerifyMountSecond 0.26
221 TestMountStart/serial/DeleteFirst 1.63
222 TestMountStart/serial/VerifyMountPostDelete 0.27
223 TestMountStart/serial/Stop 1.2
224 TestMountStart/serial/RestartStopped 7.76
225 TestMountStart/serial/VerifyMountPostStop 0.27
228 TestMultiNode/serial/FreshStart2Nodes 92.75
229 TestMultiNode/serial/DeployApp2Nodes 15.81
230 TestMultiNode/serial/PingHostFrom2Pods 1.02
231 TestMultiNode/serial/AddNode 18.65
232 TestMultiNode/serial/MultiNodeLabels 0.09
233 TestMultiNode/serial/ProfileList 0.74
234 TestMultiNode/serial/CopyFile 9.98
235 TestMultiNode/serial/StopNode 2.25
236 TestMultiNode/serial/StartAfterStop 9.65
237 TestMultiNode/serial/RestartKeepsNodes 86.69
238 TestMultiNode/serial/DeleteNode 5.35
239 TestMultiNode/serial/StopMultiNode 23.85
240 TestMultiNode/serial/RestartMultiNode 56.31
241 TestMultiNode/serial/ValidateNameConflict 36.47
246 TestPreload 120.88
248 TestScheduledStopUnix 107.86
251 TestInsufficientStorage 10.91
252 TestRunningBinaryUpgrade 94.5
254 TestKubernetesUpgrade 97.91
255 TestMissingContainerUpgrade 181.94
257 TestPause/serial/Start 83.91
259 TestNoKubernetes/serial/StartNoK8sWithVersion 0.11
260 TestNoKubernetes/serial/StartWithK8s 42.58
261 TestNoKubernetes/serial/StartWithStopK8s 16.64
262 TestNoKubernetes/serial/Start 5.24
263 TestNoKubernetes/serial/VerifyK8sNotRunning 0.31
264 TestNoKubernetes/serial/ProfileList 1.17
265 TestNoKubernetes/serial/Stop 1.22
266 TestNoKubernetes/serial/StartNoArgs 6.69
267 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.28
268 TestPause/serial/SecondStartNoReconfiguration 6.02
269 TestPause/serial/Pause 0.95
270 TestPause/serial/VerifyStatus 0.52
271 TestPause/serial/Unpause 0.93
272 TestPause/serial/PauseAgain 1.14
273 TestPause/serial/DeletePaused 3.73
274 TestPause/serial/VerifyDeletedResources 0.17
275 TestStoppedBinaryUpgrade/Setup 0.61
276 TestStoppedBinaryUpgrade/Upgrade 111.72
277 TestStoppedBinaryUpgrade/MinikubeLogs 1.28
292 TestNetworkPlugins/group/false 5.25
297 TestStartStop/group/old-k8s-version/serial/FirstStart 167.41
299 TestStartStop/group/no-preload/serial/FirstStart 63.53
300 TestStartStop/group/old-k8s-version/serial/DeployApp 10.75
301 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 2.98
302 TestStartStop/group/old-k8s-version/serial/Stop 12.34
303 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.28
305 TestStartStop/group/no-preload/serial/DeployApp 8.39
306 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.17
307 TestStartStop/group/no-preload/serial/Stop 12.6
308 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.19
309 TestStartStop/group/no-preload/serial/SecondStart 288.69
310 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
311 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.1
312 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.26
313 TestStartStop/group/no-preload/serial/Pause 3.11
315 TestStartStop/group/embed-certs/serial/FirstStart 92.79
316 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
317 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.15
318 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.42
319 TestStartStop/group/old-k8s-version/serial/Pause 4.38
321 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 55.62
322 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.37
323 TestStartStop/group/embed-certs/serial/DeployApp 8.53
324 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.1
325 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.14
326 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.33
327 TestStartStop/group/embed-certs/serial/Stop 11.92
328 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.19
329 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 292.45
330 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.19
331 TestStartStop/group/embed-certs/serial/SecondStart 271.88
332 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
333 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.1
334 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.24
335 TestStartStop/group/embed-certs/serial/Pause 3.08
337 TestStartStop/group/newest-cni/serial/FirstStart 45.34
338 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
339 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 6.11
340 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.28
341 TestStartStop/group/default-k8s-diff-port/serial/Pause 4.1
342 TestNetworkPlugins/group/auto/Start 96.79
343 TestStartStop/group/newest-cni/serial/DeployApp 0
344 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.32
345 TestStartStop/group/newest-cni/serial/Stop 1.28
346 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.24
347 TestStartStop/group/newest-cni/serial/SecondStart 21.27
348 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
349 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
350 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.3
351 TestStartStop/group/newest-cni/serial/Pause 2.96
352 TestNetworkPlugins/group/kindnet/Start 64.04
353 TestNetworkPlugins/group/auto/KubeletFlags 0.32
354 TestNetworkPlugins/group/auto/NetCatPod 10.32
355 TestNetworkPlugins/group/auto/DNS 0.2
356 TestNetworkPlugins/group/auto/Localhost 0.18
357 TestNetworkPlugins/group/auto/HairPin 0.18
358 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
359 TestNetworkPlugins/group/kindnet/KubeletFlags 0.4
360 TestNetworkPlugins/group/kindnet/NetCatPod 11.35
361 TestNetworkPlugins/group/calico/Start 72.88
362 TestNetworkPlugins/group/kindnet/DNS 0.24
363 TestNetworkPlugins/group/kindnet/Localhost 0.36
364 TestNetworkPlugins/group/kindnet/HairPin 0.54
365 TestNetworkPlugins/group/custom-flannel/Start 58.02
366 TestNetworkPlugins/group/calico/ControllerPod 6.01
367 TestNetworkPlugins/group/calico/KubeletFlags 0.33
368 TestNetworkPlugins/group/calico/NetCatPod 10.26
369 TestNetworkPlugins/group/calico/DNS 0.2
370 TestNetworkPlugins/group/calico/Localhost 0.18
371 TestNetworkPlugins/group/calico/HairPin 0.27
372 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.42
373 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.36
374 TestNetworkPlugins/group/custom-flannel/DNS 0.33
375 TestNetworkPlugins/group/custom-flannel/Localhost 0.22
376 TestNetworkPlugins/group/custom-flannel/HairPin 0.2
377 TestNetworkPlugins/group/enable-default-cni/Start 55.41
378 TestNetworkPlugins/group/flannel/Start 56.13
379 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.29
380 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.42
381 TestNetworkPlugins/group/enable-default-cni/DNS 0.16
382 TestNetworkPlugins/group/enable-default-cni/Localhost 0.14
383 TestNetworkPlugins/group/enable-default-cni/HairPin 0.15
384 TestNetworkPlugins/group/flannel/ControllerPod 6.01
385 TestNetworkPlugins/group/flannel/KubeletFlags 0.41
386 TestNetworkPlugins/group/flannel/NetCatPod 10.35
387 TestNetworkPlugins/group/flannel/DNS 0.23
388 TestNetworkPlugins/group/flannel/Localhost 0.2
389 TestNetworkPlugins/group/bridge/Start 46.59
390 TestNetworkPlugins/group/flannel/HairPin 0.23
391 TestNetworkPlugins/group/bridge/KubeletFlags 0.28
392 TestNetworkPlugins/group/bridge/NetCatPod 9.27
393 TestNetworkPlugins/group/bridge/DNS 0.17
394 TestNetworkPlugins/group/bridge/Localhost 0.15
395 TestNetworkPlugins/group/bridge/HairPin 0.15
TestDownloadOnly/v1.20.0/json-events (5.92s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-152125 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-152125 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (5.924059794s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (5.92s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0127 12:31:46.255963 1186773 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
I0127 12:31:46.256047 1186773 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20317-1181389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)
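
The preload-exists check above passes because the cached tarball at the logged path is already on disk. Reduced to its essence (the path is the one from the log; the helper name is illustrative):

	package example

	import "os"

	func preloadExists() bool {
		const tarball = "/home/jenkins/minikube-integration/20317-1181389/.minikube/" +
			"cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4"
		_, err := os.Stat(tarball)
		return err == nil
	}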

TestDownloadOnly/v1.20.0/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-152125
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-152125: exit status 85 (85.125697ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-152125 | jenkins | v1.35.0 | 27 Jan 25 12:31 UTC |          |
	|         | -p download-only-152125        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/27 12:31:40
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.23.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 12:31:40.378535 1186778 out.go:345] Setting OutFile to fd 1 ...
	I0127 12:31:40.378660 1186778 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:31:40.378677 1186778 out.go:358] Setting ErrFile to fd 2...
	I0127 12:31:40.378683 1186778 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:31:40.378997 1186778 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20317-1181389/.minikube/bin
	W0127 12:31:40.379161 1186778 root.go:314] Error reading config file at /home/jenkins/minikube-integration/20317-1181389/.minikube/config/config.json: open /home/jenkins/minikube-integration/20317-1181389/.minikube/config/config.json: no such file or directory
	I0127 12:31:40.379605 1186778 out.go:352] Setting JSON to true
	I0127 12:31:40.380486 1186778 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":18846,"bootTime":1737962255,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0127 12:31:40.380774 1186778 start.go:139] virtualization:  
	I0127 12:31:40.384886 1186778 out.go:97] [download-only-152125] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	W0127 12:31:40.385074 1186778 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/20317-1181389/.minikube/cache/preloaded-tarball: no such file or directory
	I0127 12:31:40.385107 1186778 notify.go:220] Checking for updates...
	I0127 12:31:40.387806 1186778 out.go:169] MINIKUBE_LOCATION=20317
	I0127 12:31:40.390966 1186778 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 12:31:40.393585 1186778 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20317-1181389/kubeconfig
	I0127 12:31:40.396130 1186778 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20317-1181389/.minikube
	I0127 12:31:40.398764 1186778 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0127 12:31:40.403856 1186778 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0127 12:31:40.404113 1186778 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 12:31:40.430864 1186778 docker.go:123] docker version: linux-27.5.1:Docker Engine - Community
	I0127 12:31:40.430966 1186778 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0127 12:31:40.492622 1186778 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:54 SystemTime:2025-01-27 12:31:40.484015351 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0127 12:31:40.492732 1186778 docker.go:318] overlay module found
	I0127 12:31:40.495482 1186778 out.go:97] Using the docker driver based on user configuration
	I0127 12:31:40.495517 1186778 start.go:297] selected driver: docker
	I0127 12:31:40.495525 1186778 start.go:901] validating driver "docker" against <nil>
	I0127 12:31:40.495644 1186778 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0127 12:31:40.545469 1186778 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:54 SystemTime:2025-01-27 12:31:40.536567705 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0127 12:31:40.545682 1186778 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0127 12:31:40.546004 1186778 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0127 12:31:40.546168 1186778 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0127 12:31:40.548982 1186778 out.go:169] Using Docker driver with root privileges
	I0127 12:31:40.551556 1186778 cni.go:84] Creating CNI manager for ""
	I0127 12:31:40.551637 1186778 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0127 12:31:40.551651 1186778 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0127 12:31:40.551745 1186778 start.go:340] cluster config:
	{Name:download-only-152125 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-152125 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:container
d CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 12:31:40.554335 1186778 out.go:97] Starting "download-only-152125" primary control-plane node in "download-only-152125" cluster
	I0127 12:31:40.554356 1186778 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0127 12:31:40.556820 1186778 out.go:97] Pulling base image v0.0.46 ...
	I0127 12:31:40.556844 1186778 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0127 12:31:40.556992 1186778 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon
	I0127 12:31:40.572767 1186778 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 to local cache
	I0127 12:31:40.573365 1186778 image.go:65] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local cache directory
	I0127 12:31:40.573473 1186778 image.go:150] Writing gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 to local cache
	I0127 12:31:40.617253 1186778 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0127 12:31:40.617280 1186778 cache.go:56] Caching tarball of preloaded images
	I0127 12:31:40.617425 1186778 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0127 12:31:40.620486 1186778 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0127 12:31:40.620507 1186778 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I0127 12:31:40.704048 1186778 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:7e3d48ccb9f143791669d02e14ce1643 -> /home/jenkins/minikube-integration/20317-1181389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0127 12:31:44.350882 1186778 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I0127 12:31:44.350976 1186778 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/20317-1181389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I0127 12:31:44.761188 1186778 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 as a tarball
	
	
	* The control-plane node download-only-152125 host does not exist
	  To start a cluster, run: "minikube start -p download-only-152125"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
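Note: the download logged at preload.go:236-254 above fetches the preload tarball with an md5 checksum carried in the URL query string, then verifies the saved file before caching it. A minimal sketch of that verify step in Go (a hypothetical verifyMD5 helper, not minikube's actual implementation):

	// verifyMD5 re-hashes a downloaded file and compares it to the checksum
	// embedded in the download URL (e.g. md5:7e3d48cc... above).
	package main

	import (
		"crypto/md5"
		"encoding/hex"
		"fmt"
		"io"
		"os"
	)

	func verifyMD5(path, wantHex string) error {
		f, err := os.Open(path)
		if err != nil {
			return err
		}
		defer f.Close()
		h := md5.New()
		if _, err := io.Copy(h, f); err != nil {
			return err
		}
		if got := hex.EncodeToString(h.Sum(nil)); got != wantHex {
			return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantHex)
		}
		return nil
	}

	func main() {
		// file name and checksum taken from the download line above
		err := verifyMD5("preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4",
			"7e3d48ccb9f143791669d02e14ce1643")
		fmt.Println(err)
	}
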
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.09s)

TestDownloadOnly/v1.20.0/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.21s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-152125
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.32.1/json-events (5.95s)

=== RUN   TestDownloadOnly/v1.32.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-130698 --force --alsologtostderr --kubernetes-version=v1.32.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-130698 --force --alsologtostderr --kubernetes-version=v1.32.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (5.94823194s)
--- PASS: TestDownloadOnly/v1.32.1/json-events (5.95s)

TestDownloadOnly/v1.32.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.32.1/preload-exists
I0127 12:31:52.643713 1186773 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
I0127 12:31:52.643756 1186773 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20317-1181389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.32.1/preload-exists (0.00s)

TestDownloadOnly/v1.32.1/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.32.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-130698
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-130698: exit status 85 (84.719048ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-152125 | jenkins | v1.35.0 | 27 Jan 25 12:31 UTC |                     |
	|         | -p download-only-152125        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.35.0 | 27 Jan 25 12:31 UTC | 27 Jan 25 12:31 UTC |
	| delete  | -p download-only-152125        | download-only-152125 | jenkins | v1.35.0 | 27 Jan 25 12:31 UTC | 27 Jan 25 12:31 UTC |
	| start   | -o=json --download-only        | download-only-130698 | jenkins | v1.35.0 | 27 Jan 25 12:31 UTC |                     |
	|         | -p download-only-130698        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/27 12:31:46
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.23.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 12:31:46.739496 1186979 out.go:345] Setting OutFile to fd 1 ...
	I0127 12:31:46.739642 1186979 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:31:46.739653 1186979 out.go:358] Setting ErrFile to fd 2...
	I0127 12:31:46.739659 1186979 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:31:46.739992 1186979 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20317-1181389/.minikube/bin
	I0127 12:31:46.740452 1186979 out.go:352] Setting JSON to true
	I0127 12:31:46.741356 1186979 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":18852,"bootTime":1737962255,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0127 12:31:46.741458 1186979 start.go:139] virtualization:  
	I0127 12:31:46.744675 1186979 out.go:97] [download-only-130698] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	I0127 12:31:46.744883 1186979 notify.go:220] Checking for updates...
	I0127 12:31:46.747345 1186979 out.go:169] MINIKUBE_LOCATION=20317
	I0127 12:31:46.749723 1186979 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 12:31:46.752240 1186979 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20317-1181389/kubeconfig
	I0127 12:31:46.754755 1186979 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20317-1181389/.minikube
	I0127 12:31:46.757384 1186979 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0127 12:31:46.762352 1186979 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0127 12:31:46.762606 1186979 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 12:31:46.794267 1186979 docker.go:123] docker version: linux-27.5.1:Docker Engine - Community
	I0127 12:31:46.794378 1186979 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0127 12:31:46.851512 1186979 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-01-27 12:31:46.842288203 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0127 12:31:46.851631 1186979 docker.go:318] overlay module found
	I0127 12:31:46.854339 1186979 out.go:97] Using the docker driver based on user configuration
	I0127 12:31:46.854371 1186979 start.go:297] selected driver: docker
	I0127 12:31:46.854379 1186979 start.go:901] validating driver "docker" against <nil>
	I0127 12:31:46.854498 1186979 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0127 12:31:46.904474 1186979 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-01-27 12:31:46.895215711 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0127 12:31:46.904700 1186979 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0127 12:31:46.904991 1186979 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0127 12:31:46.905179 1186979 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0127 12:31:46.908022 1186979 out.go:169] Using Docker driver with root privileges
	I0127 12:31:46.910482 1186979 cni.go:84] Creating CNI manager for ""
	I0127 12:31:46.910554 1186979 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0127 12:31:46.910566 1186979 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0127 12:31:46.910659 1186979 start.go:340] cluster config:
	{Name:download-only-130698 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:download-only-130698 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:container
d CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 12:31:46.913336 1186979 out.go:97] Starting "download-only-130698" primary control-plane node in "download-only-130698" cluster
	I0127 12:31:46.913369 1186979 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0127 12:31:46.915887 1186979 out.go:97] Pulling base image v0.0.46 ...
	I0127 12:31:46.915933 1186979 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
	I0127 12:31:46.916035 1186979 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon
	I0127 12:31:46.932123 1186979 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 to local cache
	I0127 12:31:46.932261 1186979 image.go:65] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local cache directory
	I0127 12:31:46.932281 1186979 image.go:68] Found gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local cache directory, skipping pull
	I0127 12:31:46.932286 1186979 image.go:137] gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 exists in cache, skipping pull
	I0127 12:31:46.932294 1186979 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 as a tarball
	I0127 12:31:46.972032 1186979 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.1/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-arm64.tar.lz4
	I0127 12:31:46.972056 1186979 cache.go:56] Caching tarball of preloaded images
	I0127 12:31:46.972232 1186979 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
	I0127 12:31:46.975086 1186979 out.go:97] Downloading Kubernetes v1.32.1 preload ...
	I0127 12:31:46.975107 1186979 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-arm64.tar.lz4 ...
	I0127 12:31:47.056991 1186979 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.1/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-arm64.tar.lz4?checksum=md5:3dfa1a6dfbdb6fd11337c34d558e517e -> /home/jenkins/minikube-integration/20317-1181389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-arm64.tar.lz4
	I0127 12:31:51.058005 1186979 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-arm64.tar.lz4 ...
	I0127 12:31:51.058121 1186979 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/20317-1181389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-arm64.tar.lz4 ...
	
	
	* The control-plane node download-only-130698 host does not exist
	  To start a cluster, run: "minikube start -p download-only-130698"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.32.1/LogsDuration (0.09s)

TestDownloadOnly/v1.32.1/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.32.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.32.1/DeleteAll (0.22s)

TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-130698
--- PASS: TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds (0.14s)

TestBinaryMirror (0.59s)

=== RUN   TestBinaryMirror
I0127 12:31:53.967939 1186773 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.1/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-036939 --alsologtostderr --binary-mirror http://127.0.0.1:44667 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-036939" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-036939
--- PASS: TestBinaryMirror (0.59s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-453723
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-453723: exit status 85 (78.124871ms)

-- stdout --
	* Profile "addons-453723" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-453723"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-453723
addons_test.go:950: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-453723: exit status 85 (71.894549ms)

-- stdout --
	* Profile "addons-453723" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-453723"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

TestAddons/Setup (218.96s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-arm64 start -p addons-453723 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-arm64 start -p addons-453723 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m38.956675413s)
--- PASS: TestAddons/Setup (218.96s)

TestAddons/serial/Volcano (40.98s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:823: volcano-controller stabilized in 65.704745ms
addons_test.go:807: volcano-scheduler stabilized in 65.809357ms
addons_test.go:815: volcano-admission stabilized in 65.856421ms
addons_test.go:829: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-7ff7cd6989-vgcgp" [ca9523ed-4bb6-4dfb-8eca-2ddd7013c123] Running
addons_test.go:829: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.004100604s
addons_test.go:833: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-57676bd54c-db96t" [ffc9a952-d273-42da-acfa-0feb59a9d3df] Running
addons_test.go:833: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003402991s
addons_test.go:837: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-77df547cdf-mrt4t" [05671ede-0fa8-4c55-ad54-989823a70318] Running
addons_test.go:837: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003851439s
addons_test.go:842: (dbg) Run:  kubectl --context addons-453723 delete -n volcano-system job volcano-admission-init
addons_test.go:848: (dbg) Run:  kubectl --context addons-453723 create -f testdata/vcjob.yaml
addons_test.go:856: (dbg) Run:  kubectl --context addons-453723 get vcjob -n my-volcano
addons_test.go:874: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [0fdcc731-0e6e-4b86-9c50-8698959f933c] Pending
helpers_test.go:344: "test-job-nginx-0" [0fdcc731-0e6e-4b86-9c50-8698959f933c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [0fdcc731-0e6e-4b86-9c50-8698959f933c] Running
addons_test.go:874: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 13.003629892s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-453723 addons disable volcano --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-453723 addons disable volcano --alsologtostderr -v=1: (11.323115769s)
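Note: the "waiting 6m0s for pods matching ..." lines above poll for pods by label selector until they report Running within the timeout. A minimal client-go sketch of the same pattern (hypothetical code, assuming a KUBECONFIG-based clientset; not the helpers the test actually uses):

	package main

	import (
		"context"
		"fmt"
		"os"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForLabel polls every 2s until a pod matching selector is Running.
	func waitForLabel(ctx context.Context, cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
		return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true,
			func(ctx context.Context) (bool, error) {
				pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
				if err != nil {
					return false, err
				}
				for _, p := range pods.Items {
					if p.Status.Phase == corev1.PodRunning {
						return true, nil
					}
				}
				return false, nil // keep polling
			})
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// same namespace/selector/timeout as the first wait above
		err = waitForLabel(context.Background(), cs, "volcano-system", "app=volcano-scheduler", 6*time.Minute)
		fmt.Println(err)
	}
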
--- PASS: TestAddons/serial/Volcano (40.98s)

TestAddons/serial/GCPAuth/Namespaces (0.18s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-453723 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-453723 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.18s)

TestAddons/serial/GCPAuth/FakeCredentials (9.86s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-453723 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-453723 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [06cd7ad7-9f84-431a-9486-7990eff4e629] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [06cd7ad7-9f84-431a-9486-7990eff4e629] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.004641134s
addons_test.go:633: (dbg) Run:  kubectl --context addons-453723 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-453723 describe sa gcp-auth-test
addons_test.go:659: (dbg) Run:  kubectl --context addons-453723 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:683: (dbg) Run:  kubectl --context addons-453723 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
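Note: the printenv/cat assertions above check what the gcp-auth addon injects into new pods: a GOOGLE_APPLICATION_CREDENTIALS variable pointing at a mounted /google-app-creds.json, plus GOOGLE_CLOUD_PROJECT. A minimal in-pod check in Go (a sketch; only meaningful when run inside a pod in this cluster):

	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		// injected by the gcp-auth webhook, per the test above
		credPath := os.Getenv("GOOGLE_APPLICATION_CREDENTIALS") // e.g. /google-app-creds.json
		data, err := os.ReadFile(credPath)
		if err != nil {
			panic(err)
		}
		fmt.Printf("project=%s creds=%d bytes\n", os.Getenv("GOOGLE_CLOUD_PROJECT"), len(data))
	}
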
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.86s)

TestAddons/parallel/Registry (16.73s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 6.314963ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6c88467877-qn2k5" [63da1c55-af0c-4bc0-addb-b3d9a76b7871] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.004558053s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-9m9j6" [2cb584e2-4a22-431e-8d0c-b0e780aaacd6] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.004012559s
addons_test.go:331: (dbg) Run:  kubectl --context addons-453723 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-453723 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-453723 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.728268675s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-arm64 -p addons-453723 ip
2025/01/27 12:36:49 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-453723 addons disable registry --alsologtostderr -v=1
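Note: the wget --spider call above relies on in-cluster service DNS: any pod can reach the addon registry at registry.kube-system.svc.cluster.local, following the <service>.<namespace>.svc.cluster.local convention. The same probe sketched in Go (hypothetical; like the wget, it only resolves from inside the cluster):

	package main

	import (
		"fmt"
		"net/http"
	)

	func main() {
		// HEAD is the closest analogue of `wget --spider`: headers only, no body
		resp, err := http.Head("http://registry.kube-system.svc.cluster.local")
		if err != nil {
			panic(err)
		}
		resp.Body.Close()
		fmt.Println(resp.Status)
	}
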
--- PASS: TestAddons/parallel/Registry (16.73s)

TestAddons/parallel/Ingress (19.84s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-453723 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-453723 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-453723 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [a54c83c2-0912-46ec-b2be-7f4bd1daabc9] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [a54c83c2-0912-46ec-b2be-7f4bd1daabc9] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.00383391s
I0127 12:38:06.423135 1186773 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-arm64 -p addons-453723 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context addons-453723 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-arm64 -p addons-453723 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-453723 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-453723 addons disable ingress-dns --alsologtostderr -v=1: (1.425810683s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-453723 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-453723 addons disable ingress --alsologtostderr -v=1: (7.793165335s)
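Note: the curl at addons_test.go:262 exercises name-based routing: the request targets 127.0.0.1 (it runs inside the node via minikube ssh), and the ingress controller selects the nginx backend from the Host header. The same probe in Go (a sketch; nginx.example.com is the host from testdata/nginx-ingress-v1.yaml):

	package main

	import (
		"fmt"
		"net/http"
	)

	func main() {
		req, err := http.NewRequest("GET", "http://127.0.0.1/", nil)
		if err != nil {
			panic(err)
		}
		req.Host = "nginx.example.com" // overrides the Host header sent on the wire
		resp, err := http.DefaultClient.Do(req)
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		fmt.Println(resp.Status)
	}
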
--- PASS: TestAddons/parallel/Ingress (19.84s)

TestAddons/parallel/InspektorGadget (11.84s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-t4pd7" [09fa45ff-31c7-432d-bf78-a0c593110cd7] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.005255591s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-453723 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-453723 addons disable inspektor-gadget --alsologtostderr -v=1: (5.829619449s)
--- PASS: TestAddons/parallel/InspektorGadget (11.84s)

TestAddons/parallel/MetricsServer (6.91s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 12.004839ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7fbb699795-rtmq5" [c17ca18c-f2c2-4da1-b619-5bbb7957bc29] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.003359478s
addons_test.go:402: (dbg) Run:  kubectl --context addons-453723 top pods -n kube-system
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-453723 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.91s)

TestAddons/parallel/CSI (41.78s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I0127 12:37:15.063630 1186773 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0127 12:37:15.069476 1186773 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0127 12:37:15.069510 1186773 kapi.go:107] duration metric: took 8.361128ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 8.371384ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-453723 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-453723 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-453723 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-453723 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-453723 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [d7573547-944e-409d-a42b-cb39b940ca22] Pending
helpers_test.go:344: "task-pv-pod" [d7573547-944e-409d-a42b-cb39b940ca22] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [d7573547-944e-409d-a42b-cb39b940ca22] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.003187052s
addons_test.go:511: (dbg) Run:  kubectl --context addons-453723 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-453723 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-453723 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-453723 delete pod task-pv-pod
addons_test.go:527: (dbg) Run:  kubectl --context addons-453723 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-453723 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-453723 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-453723 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-453723 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-453723 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-453723 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-453723 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-453723 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-453723 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-453723 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-453723 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-453723 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-453723 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-453723 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-453723 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [90c7f2d4-22d2-4c24-9b9c-4fe50122adc2] Pending
helpers_test.go:344: "task-pv-pod-restore" [90c7f2d4-22d2-4c24-9b9c-4fe50122adc2] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.004716821s
addons_test.go:553: (dbg) Run:  kubectl --context addons-453723 delete pod task-pv-pod-restore
addons_test.go:557: (dbg) Run:  kubectl --context addons-453723 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-453723 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-453723 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-453723 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-453723 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.857254145s)
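Note: the restore step above (pvc-restore.yaml) works by declaring the earlier VolumeSnapshot as the new claim's dataSource, so the CSI driver provisions hpvc-restore from the snapshot contents. A sketch of the equivalent PVC built with client-go types (hypothetical reconstruction, assuming k8s.io/api >= v0.29; the repo's actual YAML may differ):

	package main

	import (
		"fmt"

		corev1 "k8s.io/api/core/v1"
		"k8s.io/apimachinery/pkg/api/resource"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	)

	func main() {
		apiGroup := "snapshot.storage.k8s.io"
		pvc := corev1.PersistentVolumeClaim{
			ObjectMeta: metav1.ObjectMeta{Name: "hpvc-restore"},
			Spec: corev1.PersistentVolumeClaimSpec{
				AccessModes: []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
				// restore: the snapshot taken earlier becomes the data source
				DataSource: &corev1.TypedLocalObjectReference{
					APIGroup: &apiGroup,
					Kind:     "VolumeSnapshot",
					Name:     "new-snapshot-demo",
				},
				Resources: corev1.VolumeResourceRequirements{
					Requests: corev1.ResourceList{
						corev1.ResourceStorage: resource.MustParse("1Gi"), // size is an assumption
					},
				},
			},
		}
		fmt.Println(pvc.Name, "restores from", pvc.Spec.DataSource.Name)
	}
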
--- PASS: TestAddons/parallel/CSI (41.78s)

TestAddons/parallel/Headlamp (16.96s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-453723 --alsologtostderr -v=1
addons_test.go:747: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-453723 --alsologtostderr -v=1: (1.144159311s)
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-69d78d796f-gm85w" [bfec4177-0ff2-4673-8395-2c7d5401543d] Pending
helpers_test.go:344: "headlamp-69d78d796f-gm85w" [bfec4177-0ff2-4673-8395-2c7d5401543d] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-69d78d796f-gm85w" [bfec4177-0ff2-4673-8395-2c7d5401543d] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.003713811s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-453723 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-453723 addons disable headlamp --alsologtostderr -v=1: (5.81014173s)
--- PASS: TestAddons/parallel/Headlamp (16.96s)

TestAddons/parallel/CloudSpanner (6.59s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5d76cffbc-fqszk" [d8d886f9-f16a-474b-bf19-f67cbb40beda] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.00431584s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-453723 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.59s)

TestAddons/parallel/LocalPath (55.07s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-453723 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-453723 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-453723 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-453723 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-453723 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-453723 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-453723 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-453723 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [c9cd9e55-acae-4e34-a25c-eba31fd41841] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [c9cd9e55-acae-4e34-a25c-eba31fd41841] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [c9cd9e55-acae-4e34-a25c-eba31fd41841] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.004321907s
addons_test.go:906: (dbg) Run:  kubectl --context addons-453723 get pvc test-pvc -o=json
addons_test.go:915: (dbg) Run:  out/minikube-linux-arm64 -p addons-453723 ssh "cat /opt/local-path-provisioner/pvc-5552c6ff-e9b1-457e-9cc8-3bb87f2786f7_default_test-pvc/file1"
addons_test.go:927: (dbg) Run:  kubectl --context addons-453723 delete pod test-local-path
addons_test.go:931: (dbg) Run:  kubectl --context addons-453723 delete pvc test-pvc
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-453723 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-453723 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.608658421s)
--- PASS: TestAddons/parallel/LocalPath (55.07s)

TestAddons/parallel/NvidiaDevicePlugin (5.98s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-fr4jk" [40f62e8e-6587-4a86-aa81-5aa4f1c5a343] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004521454s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-453723 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.98s)

TestAddons/parallel/Yakd (11.86s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-575dd5996b-rh5ch" [2b58edf7-036e-4de5-933b-04a7ee297a90] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004439952s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-453723 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-453723 addons disable yakd --alsologtostderr -v=1: (5.858031609s)
--- PASS: TestAddons/parallel/Yakd (11.86s)

TestAddons/StoppedEnableDisable (12.2s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-453723
addons_test.go:170: (dbg) Done: out/minikube-linux-arm64 stop -p addons-453723: (11.91643456s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-453723
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-453723
addons_test.go:183: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-453723
--- PASS: TestAddons/StoppedEnableDisable (12.20s)

TestCertOptions (35.01s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-511343 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-511343 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (32.344544337s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-511343 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-511343 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-511343 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-511343" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-511343
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-511343: (1.998313367s)
--- PASS: TestCertOptions (35.01s)
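The certificate checks above condense to a few commands; a minimal untested sketch, where "demo" is a placeholder profile and the SAN remark is an assumption rather than part of the test:
	minikube start -p demo --driver=docker --container-runtime=containerd \
	  --apiserver-ips=192.168.15.15 --apiserver-names=www.google.com --apiserver-port=8555
	# the extra IPs/names should appear among the apiserver cert's Subject Alternative Names
	minikube -p demo ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"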
TestCertExpiration (231.81s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-135138 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-135138 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (41.485899726s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-135138 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-135138 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (8.003953036s)
helpers_test.go:175: Cleaning up "cert-expiration-135138" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-135138
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-135138: (2.314653138s)
--- PASS: TestCertExpiration (231.81s)
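Condensed into a by-hand sketch (untested; "demo" is a placeholder): issue short-lived certs, let them lapse, then restart with a longer expiry so minikube regenerates them.
	minikube start -p demo --cert-expiration=3m --driver=docker --container-runtime=containerd
	# ...wait past the 3m window, then:
	minikube start -p demo --cert-expiration=8760h --driver=docker --container-runtime=containerd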
TestForceSystemdFlag (46.73s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-334749 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-334749 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (43.944129335s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-334749 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-334749" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-334749
E0127 13:13:36.700930 1186773 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/addons-453723/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-334749: (2.272404389s)
--- PASS: TestForceSystemdFlag (46.73s)
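A by-hand equivalent of this check (untested sketch; "demo" is a placeholder, and SystemdCgroup is the containerd setting one would expect --force-systemd to flip, an assumption not stated in the log):
	minikube start -p demo --force-systemd --driver=docker --container-runtime=containerd
	minikube -p demo ssh "cat /etc/containerd/config.toml"   # look for SystemdCgroup = true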
TestForceSystemdEnv (46.3s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-852325 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-852325 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (43.043670054s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-852325 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-852325" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-852325
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-852325: (2.763934917s)
--- PASS: TestForceSystemdEnv (46.30s)

TestDockerEnvContainerd (44.12s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-908963 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-908963 --driver=docker  --container-runtime=containerd: (28.474295509s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-908963"
docker_test.go:189: (dbg) Done: /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-908963": (1.004072733s)
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-a7IRRHIGo8Bf/agent.1207628" SSH_AGENT_PID="1207629" DOCKER_HOST=ssh://docker@127.0.0.1:33937 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-a7IRRHIGo8Bf/agent.1207628" SSH_AGENT_PID="1207629" DOCKER_HOST=ssh://docker@127.0.0.1:33937 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-a7IRRHIGo8Bf/agent.1207628" SSH_AGENT_PID="1207629" DOCKER_HOST=ssh://docker@127.0.0.1:33937 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.204916066s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-a7IRRHIGo8Bf/agent.1207628" SSH_AGENT_PID="1207629" DOCKER_HOST=ssh://docker@127.0.0.1:33937 docker image ls"
helpers_test.go:175: Cleaning up "dockerenv-908963" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-908963
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-908963: (2.027573595s)
--- PASS: TestDockerEnvContainerd (44.12s)
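The docker-env round trip above maps onto the following by-hand sketch (untested; "demo" and the image tag are placeholders):
	minikube start -p demo --driver=docker --container-runtime=containerd
	# point the local docker CLI at the daemon inside the node, over SSH
	eval "$(minikube docker-env --ssh-host --ssh-add -p demo)"
	docker version
	DOCKER_BUILDKIT=0 docker build -t local/demo-test:latest .
	docker image ls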
TestErrorSpam/setup (29.08s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-128534 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-128534 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-128534 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-128534 --driver=docker  --container-runtime=containerd: (29.074744349s)
--- PASS: TestErrorSpam/setup (29.08s)

TestErrorSpam/start (0.83s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-128534 --log_dir /tmp/nospam-128534 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-128534 --log_dir /tmp/nospam-128534 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-128534 --log_dir /tmp/nospam-128534 start --dry-run
--- PASS: TestErrorSpam/start (0.83s)

TestErrorSpam/status (1.13s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-128534 --log_dir /tmp/nospam-128534 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-128534 --log_dir /tmp/nospam-128534 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-128534 --log_dir /tmp/nospam-128534 status
--- PASS: TestErrorSpam/status (1.13s)

TestErrorSpam/pause (1.85s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-128534 --log_dir /tmp/nospam-128534 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-128534 --log_dir /tmp/nospam-128534 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-128534 --log_dir /tmp/nospam-128534 pause
--- PASS: TestErrorSpam/pause (1.85s)

TestErrorSpam/unpause (1.91s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-128534 --log_dir /tmp/nospam-128534 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-128534 --log_dir /tmp/nospam-128534 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-128534 --log_dir /tmp/nospam-128534 unpause
--- PASS: TestErrorSpam/unpause (1.91s)

TestErrorSpam/stop (2.06s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-128534 --log_dir /tmp/nospam-128534 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-128534 --log_dir /tmp/nospam-128534 stop: (1.850668607s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-128534 --log_dir /tmp/nospam-128534 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-128534 --log_dir /tmp/nospam-128534 stop
--- PASS: TestErrorSpam/stop (2.06s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/20317-1181389/.minikube/files/etc/test/nested/copy/1186773/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (49.12s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-arm64 start -p functional-547155 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
E0127 12:40:33.634517 1186773 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/addons-453723/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:40:33.640890 1186773 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/addons-453723/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:40:33.652316 1186773 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/addons-453723/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:40:33.673723 1186773 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/addons-453723/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:40:33.715122 1186773 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/addons-453723/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:40:33.796432 1186773 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/addons-453723/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:40:33.957949 1186773 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/addons-453723/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:40:34.279424 1186773 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/addons-453723/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:40:34.921428 1186773 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/addons-453723/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:40:36.202792 1186773 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/addons-453723/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:40:38.765178 1186773 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/addons-453723/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:40:43.886522 1186773 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/addons-453723/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2234: (dbg) Done: out/minikube-linux-arm64 start -p functional-547155 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (49.12268444s)
--- PASS: TestFunctional/serial/StartWithProxy (49.12s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (6.18s)

=== RUN   TestFunctional/serial/SoftStart
I0127 12:40:49.093068 1186773 config.go:182] Loaded profile config "functional-547155": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
functional_test.go:659: (dbg) Run:  out/minikube-linux-arm64 start -p functional-547155 --alsologtostderr -v=8
E0127 12:40:54.128563 1186773 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/addons-453723/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:659: (dbg) Done: out/minikube-linux-arm64 start -p functional-547155 --alsologtostderr -v=8: (6.180356967s)
functional_test.go:663: soft start took 6.182287829s for "functional-547155" cluster.
I0127 12:40:55.273782 1186773 config.go:182] Loaded profile config "functional-547155": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
--- PASS: TestFunctional/serial/SoftStart (6.18s)

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.1s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-547155 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

TestFunctional/serial/CacheCmd/cache/add_remote (4.16s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-547155 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-547155 cache add registry.k8s.io/pause:3.1: (1.533489774s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-547155 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-547155 cache add registry.k8s.io/pause:3.3: (1.383571126s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-547155 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-547155 cache add registry.k8s.io/pause:latest: (1.242788085s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.16s)

TestFunctional/serial/CacheCmd/cache/add_local (1.39s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-547155 /tmp/TestFunctionalserialCacheCmdcacheadd_local730936817/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-arm64 -p functional-547155 cache add minikube-local-cache-test:functional-547155
functional_test.go:1094: (dbg) Run:  out/minikube-linux-arm64 -p functional-547155 cache delete minikube-local-cache-test:functional-547155
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-547155
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.39s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

TestFunctional/serial/CacheCmd/cache/list (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-arm64 -p functional-547155 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-arm64 -p functional-547155 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-arm64 -p functional-547155 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-547155 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (300.728298ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-arm64 -p functional-547155 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-arm64 -p functional-547155 cache reload: (1.113914166s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-arm64 -p functional-547155 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.04s)
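Condensed, the reload cycle verified here looks like this (untested sketch; "demo" is a placeholder profile):
	minikube -p demo ssh sudo crictl rmi registry.k8s.io/pause:latest
	# cache reload pushes every image in minikube's cache back into the node's runtime
	minikube -p demo cache reload
	minikube -p demo ssh sudo crictl inspecti registry.k8s.io/pause:latest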
TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.15s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-arm64 -p functional-547155 kubectl -- --context functional-547155 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.15s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-547155 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)

TestFunctional/serial/ExtraConfig (41.89s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-arm64 start -p functional-547155 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0127 12:41:14.610172 1186773 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/addons-453723/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:757: (dbg) Done: out/minikube-linux-arm64 start -p functional-547155 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (41.890255967s)
functional_test.go:761: restart took 41.890368324s for "functional-547155" cluster.
I0127 12:41:45.785461 1186773 config.go:182] Loaded profile config "functional-547155": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
--- PASS: TestFunctional/serial/ExtraConfig (41.89s)

TestFunctional/serial/ComponentHealth (0.1s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-547155 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

TestFunctional/serial/LogsCmd (1.68s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-arm64 -p functional-547155 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-arm64 -p functional-547155 logs: (1.681227676s)
--- PASS: TestFunctional/serial/LogsCmd (1.68s)

TestFunctional/serial/LogsFileCmd (1.76s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-arm64 -p functional-547155 logs --file /tmp/TestFunctionalserialLogsFileCmd2265602401/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-arm64 -p functional-547155 logs --file /tmp/TestFunctionalserialLogsFileCmd2265602401/001/logs.txt: (1.760866592s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.76s)

TestFunctional/serial/InvalidService (4.35s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-547155 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-547155
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-547155: exit status 115 (619.564536ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:32275 |
	|-----------|-------------|-------------|---------------------------|
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-547155 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.35s)

TestFunctional/parallel/ConfigCmd (0.5s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-547155 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-547155 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-547155 config get cpus: exit status 14 (93.729527ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-547155 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-547155 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-547155 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-547155 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-547155 config get cpus: exit status 14 (84.592832ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.50s)
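The set/get/unset cycle above, as one would run it by hand (untested sketch; "demo" is a placeholder profile):
	minikube -p demo config set cpus 2
	minikube -p demo config get cpus      # prints 2
	minikube -p demo config unset cpus
	minikube -p demo config get cpus      # exits 14: key not found in config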
TestFunctional/parallel/DashboardCmd (16.15s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-547155 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-547155 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 1222771: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (16.15s)

TestFunctional/parallel/DryRun (0.44s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-arm64 start -p functional-547155 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-547155 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (195.267531ms)

-- stdout --
	* [functional-547155] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20317
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20317-1181389/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20317-1181389/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile

-- /stdout --
** stderr ** 
	I0127 12:42:28.161463 1222475 out.go:345] Setting OutFile to fd 1 ...
	I0127 12:42:28.161593 1222475 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:42:28.161604 1222475 out.go:358] Setting ErrFile to fd 2...
	I0127 12:42:28.161609 1222475 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:42:28.161970 1222475 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20317-1181389/.minikube/bin
	I0127 12:42:28.162412 1222475 out.go:352] Setting JSON to false
	I0127 12:42:28.163476 1222475 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":19494,"bootTime":1737962255,"procs":211,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0127 12:42:28.163591 1222475 start.go:139] virtualization:  
	I0127 12:42:28.167094 1222475 out.go:177] * [functional-547155] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	I0127 12:42:28.170745 1222475 out.go:177]   - MINIKUBE_LOCATION=20317
	I0127 12:42:28.170940 1222475 notify.go:220] Checking for updates...
	I0127 12:42:28.176538 1222475 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 12:42:28.179326 1222475 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20317-1181389/kubeconfig
	I0127 12:42:28.182122 1222475 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20317-1181389/.minikube
	I0127 12:42:28.184737 1222475 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0127 12:42:28.187455 1222475 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 12:42:28.190715 1222475 config.go:182] Loaded profile config "functional-547155": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 12:42:28.191286 1222475 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 12:42:28.217243 1222475 docker.go:123] docker version: linux-27.5.1:Docker Engine - Community
	I0127 12:42:28.217365 1222475 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0127 12:42:28.273718 1222475 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:54 SystemTime:2025-01-27 12:42:28.264009452 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0127 12:42:28.273831 1222475 docker.go:318] overlay module found
	I0127 12:42:28.276544 1222475 out.go:177] * Using the docker driver based on existing profile
	I0127 12:42:28.279172 1222475 start.go:297] selected driver: docker
	I0127 12:42:28.279196 1222475 start.go:901] validating driver "docker" against &{Name:functional-547155 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:functional-547155 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 12:42:28.279318 1222475 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 12:42:28.282589 1222475 out.go:201] 
	W0127 12:42:28.285292 1222475 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0127 12:42:28.287942 1222475 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-arm64 start -p functional-547155 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.44s)
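A by-hand equivalent of the dry-run validation (untested sketch; "demo" is a placeholder, and the exit-0 remark is an assumption about the success path):
	# config validation only; the 250MB request fails with exit code 23 (RSRC_INSUFFICIENT_REQ_MEMORY)
	minikube start -p demo --dry-run --memory 250MB
	# with acceptable settings the dry run validates and exits 0 without starting anything
	minikube start -p demo --dry-run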
TestFunctional/parallel/InternationalLanguage (0.26s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-arm64 start -p functional-547155 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-547155 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (259.579498ms)

-- stdout --
	* [functional-547155] minikube v1.35.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20317
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20317-1181389/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20317-1181389/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant

-- /stdout --
** stderr ** 
	I0127 12:42:27.921097 1222371 out.go:345] Setting OutFile to fd 1 ...
	I0127 12:42:27.921251 1222371 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:42:27.921256 1222371 out.go:358] Setting ErrFile to fd 2...
	I0127 12:42:27.921261 1222371 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:42:27.922093 1222371 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20317-1181389/.minikube/bin
	I0127 12:42:27.922483 1222371 out.go:352] Setting JSON to false
	I0127 12:42:27.923438 1222371 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":19493,"bootTime":1737962255,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0127 12:42:27.923516 1222371 start.go:139] virtualization:  
	I0127 12:42:27.927668 1222371 out.go:177] * [functional-547155] minikube v1.35.0 sur Ubuntu 20.04 (arm64)
	I0127 12:42:27.930426 1222371 out.go:177]   - MINIKUBE_LOCATION=20317
	I0127 12:42:27.930453 1222371 notify.go:220] Checking for updates...
	I0127 12:42:27.933005 1222371 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 12:42:27.935583 1222371 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20317-1181389/kubeconfig
	I0127 12:42:27.938282 1222371 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20317-1181389/.minikube
	I0127 12:42:27.941055 1222371 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0127 12:42:27.943670 1222371 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 12:42:27.946705 1222371 config.go:182] Loaded profile config "functional-547155": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 12:42:27.947377 1222371 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 12:42:27.981756 1222371 docker.go:123] docker version: linux-27.5.1:Docker Engine - Community
	I0127 12:42:27.982024 1222371 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0127 12:42:28.076910 1222371 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:54 SystemTime:2025-01-27 12:42:28.063171064 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0127 12:42:28.077059 1222371 docker.go:318] overlay module found
	I0127 12:42:28.079819 1222371 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0127 12:42:28.082616 1222371 start.go:297] selected driver: docker
	I0127 12:42:28.082638 1222371 start.go:901] validating driver "docker" against &{Name:functional-547155 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:functional-547155 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 12:42:28.082749 1222371 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 12:42:28.085985 1222371 out.go:201] 
	W0127 12:42:28.088663 1222371 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0127 12:42:28.091427 1222371 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.26s)

TestFunctional/parallel/StatusCmd (1.12s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-arm64 -p functional-547155 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-arm64 -p functional-547155 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-arm64 -p functional-547155 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.12s)

TestFunctional/parallel/ServiceCmdConnect (12.65s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-547155 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-547155 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-8449669db6-4knn4" [16a1af73-d5dd-4337-ac33-8cf8c89b401f] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-8449669db6-4knn4" [16a1af73-d5dd-4337-ac33-8cf8c89b401f] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 12.003304124s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-arm64 -p functional-547155 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:32664
functional_test.go:1675: http://192.168.49.2:32664: success! body:

Hostname: hello-node-connect-8449669db6-4knn4

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:32664
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-
--- PASS: TestFunctional/parallel/ServiceCmdConnect (12.65s)
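
The flow above is: create a deployment, expose it as a NodePort, wait for the pod, then curl the resolved URL. A manual replay sketch (the NodePort 32664 is allocated dynamically, so resolve it via "service --url" rather than hard-coding it):

	kubectl --context functional-547155 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
	kubectl --context functional-547155 expose deployment hello-node-connect --type=NodePort --port=8080
	kubectl --context functional-547155 wait --for=condition=ready pod -l app=hello-node-connect --timeout=120s
	URL=$(out/minikube-linux-arm64 -p functional-547155 service hello-node-connect --url)
	curl -s "$URL"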

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.2s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-arm64 -p functional-547155 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-arm64 -p functional-547155 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.20s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (26.12s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [2679de34-f1cf-402d-988d-244e1c2d4ec8] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004559189s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-547155 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-547155 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-547155 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-547155 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [7a7ff943-de8c-43dc-82a0-0229a2be6043] Pending
helpers_test.go:344: "sp-pod" [7a7ff943-de8c-43dc-82a0-0229a2be6043] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [7a7ff943-de8c-43dc-82a0-0229a2be6043] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.006427821s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-547155 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-547155 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-547155 delete -f testdata/storage-provisioner/pod.yaml: (1.099974267s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-547155 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [310a1471-c4df-4bce-b0e0-f582e32a7ec9] Pending
helpers_test.go:344: "sp-pod" [310a1471-c4df-4bce-b0e0-f582e32a7ec9] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003318823s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-547155 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (26.12s)
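
The persistence check boils down to: claim a volume, write a file through one pod, delete the pod, then read the file back from a fresh pod. A sketch using the same testdata manifests the test applies:

	kubectl --context functional-547155 apply -f testdata/storage-provisioner/pvc.yaml
	kubectl --context functional-547155 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-547155 exec sp-pod -- touch /tmp/mount/foo
	kubectl --context functional-547155 delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-547155 apply -f testdata/storage-provisioner/pod.yaml
	# the file survives the pod because it lives on the PVC-backed mount
	kubectl --context functional-547155 exec sp-pod -- ls /tmp/mount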

                                                
                                    
TestFunctional/parallel/SSHCmd (0.69s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-arm64 -p functional-547155 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-arm64 -p functional-547155 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.69s)

                                                
                                    
TestFunctional/parallel/CpCmd (2.25s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-547155 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-547155 ssh -n functional-547155 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-547155 cp functional-547155:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd489459482/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-547155 ssh -n functional-547155 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-547155 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-547155 ssh -n functional-547155 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.25s)
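
"minikube cp" is exercised in three directions here; a sketch of the same cases (the host destination path is illustrative):

	# host -> guest
	out/minikube-linux-arm64 -p functional-547155 cp testdata/cp-test.txt /home/docker/cp-test.txt
	# guest -> host
	out/minikube-linux-arm64 -p functional-547155 cp functional-547155:/home/docker/cp-test.txt /tmp/cp-test.txt
	# host -> guest into a directory that does not exist yet
	out/minikube-linux-arm64 -p functional-547155 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt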

                                                
                                    
TestFunctional/parallel/FileSync (0.27s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/1186773/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-arm64 -p functional-547155 ssh "sudo cat /etc/test/nested/copy/1186773/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.27s)
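
File sync copies anything placed under the "files" directory of MINIKUBE_HOME into the guest at the same relative path (a sketch; 1186773 is just the test runner's PID, used to keep the path unique):

	# on the host, before the cluster starts:
	mkdir -p "$MINIKUBE_HOME/files/etc/test/nested/copy/1186773"
	echo "Test file for checking file sync process" > "$MINIKUBE_HOME/files/etc/test/nested/copy/1186773/hosts"
	# after start, the same path exists inside the VM:
	out/minikube-linux-arm64 -p functional-547155 ssh "cat /etc/test/nested/copy/1186773/hosts"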

                                                
                                    
TestFunctional/parallel/CertSync (1.81s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/1186773.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-547155 ssh "sudo cat /etc/ssl/certs/1186773.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/1186773.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-547155 ssh "sudo cat /usr/share/ca-certificates/1186773.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-547155 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/11867732.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-547155 ssh "sudo cat /etc/ssl/certs/11867732.pem"
2025/01/27 12:42:44 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/11867732.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-547155 ssh "sudo cat /usr/share/ca-certificates/11867732.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-547155 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.81s)
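
The hashed filenames checked above (51391683.0, 3ec20f2e.0) follow OpenSSL's subject-hash naming for certificates minikube syncs from the "certs" directory of MINIKUBE_HOME. A sketch for checking which hash a given PEM maps to:

	# prints the hash that becomes the <hash>.0 name under /etc/ssl/certs
	openssl x509 -in 1186773.pem -noout -subject_hash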

                                                
                                    
TestFunctional/parallel/NodeLabels (0.08s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-547155 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.56s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-547155 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-547155 ssh "sudo systemctl is-active docker": exit status 1 (277.8145ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-547155 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-547155 ssh "sudo systemctl is-active crio": exit status 1 (279.48553ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.56s)
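
The non-zero exits above are the point of the test: "systemctl is-active" exits with status 3 for an inactive unit, and with containerd selected as the runtime, docker and crio must both be inactive. A sketch of the same probe, including the one unit that should succeed:

	out/minikube-linux-arm64 -p functional-547155 ssh "sudo systemctl is-active docker"      # expect "inactive", exit 3
	out/minikube-linux-arm64 -p functional-547155 ssh "sudo systemctl is-active crio"        # expect "inactive", exit 3
	out/minikube-linux-arm64 -p functional-547155 ssh "sudo systemctl is-active containerd"  # expect "active", exit 0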

                                                
                                    
TestFunctional/parallel/License (0.25s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.25s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.68s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-547155 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-547155 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-547155 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 1219792: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-547155 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.68s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-547155 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.44s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-547155 apply -f testdata/testsvc.yaml
E0127 12:41:55.571406 1186773 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/addons-453723/client.crt: no such file or directory" logger="UnhandledError"
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [db7ddc30-f8cf-4cbe-b876-0f105bd30cec] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [db7ddc30-f8cf-4cbe-b876-0f105bd30cec] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.004405655s
I0127 12:42:03.996032 1186773 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.44s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-547155 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.100.88.71 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-547155 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
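
The tunnel subtests walk the full lifecycle: start a tunnel, deploy a LoadBalancer service, confirm it is assigned an ingress IP, curl it directly, then tear the tunnel down. A manual sketch (testsvc.yaml is the nginx LoadBalancer manifest the test applies):

	out/minikube-linux-arm64 -p functional-547155 tunnel --alsologtostderr &
	kubectl --context functional-547155 apply -f testdata/testsvc.yaml
	# with the tunnel routing, the service reports a reachable ingress IP
	IP=$(kubectl --context functional-547155 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
	curl -s "http://$IP"
	# stopping the tunnel removes the route again
	kill %1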

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (7.22s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-547155 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-547155 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64fc58db8c-x4pjl" [44cd26d8-92ce-4bb1-bd2f-696d71b55750] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64fc58db8c-x4pjl" [44cd26d8-92ce-4bb1-bd2f-696d71b55750] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.003957315s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.22s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1315: Took "358.962277ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1329: Took "57.11913ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.43s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1366: Took "367.082482ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1379: Took "58.982071ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.43s)
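
The timings explain the two variants: a plain "profile list" inspects each cluster's live status, while "-l"/"--light" reads only the on-disk profile config, which is why it runs roughly six times faster here (about 360 ms vs 60 ms). The four invocations side by side:

	out/minikube-linux-arm64 profile list
	out/minikube-linux-arm64 profile list -l
	out/minikube-linux-arm64 profile list -o json
	out/minikube-linux-arm64 profile list -o json --light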

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (8.17s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-547155 /tmp/TestFunctionalparallelMountCmdany-port1768438248/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1737981743321370112" to /tmp/TestFunctionalparallelMountCmdany-port1768438248/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1737981743321370112" to /tmp/TestFunctionalparallelMountCmdany-port1768438248/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1737981743321370112" to /tmp/TestFunctionalparallelMountCmdany-port1768438248/001/test-1737981743321370112
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-547155 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-547155 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (330.465969ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I0127 12:42:23.652883 1186773 retry.go:31] will retry after 598.186471ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-547155 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-547155 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jan 27 12:42 created-by-test
-rw-r--r-- 1 docker docker 24 Jan 27 12:42 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jan 27 12:42 test-1737981743321370112
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-547155 ssh cat /mount-9p/test-1737981743321370112
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-547155 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [aba902a0-66a0-4977-a1f7-2fa1d2c54889] Pending
helpers_test.go:344: "busybox-mount" [aba902a0-66a0-4977-a1f7-2fa1d2c54889] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [aba902a0-66a0-4977-a1f7-2fa1d2c54889] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [aba902a0-66a0-4977-a1f7-2fa1d2c54889] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.004140106s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-547155 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-547155 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-547155 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-547155 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-547155 /tmp/TestFunctionalparallelMountCmdany-port1768438248/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.17s)
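
The mount test drives a 9p mount end to end: expose a host directory in the guest, verify it with findmnt (the first probe races the mount coming up, hence the single retry above), then let a pod write through it. A sketch with an illustrative host path:

	out/minikube-linux-arm64 mount -p functional-547155 /tmp/mount-src:/mount-9p &
	out/minikube-linux-arm64 -p functional-547155 ssh "findmnt -T /mount-9p"
	out/minikube-linux-arm64 -p functional-547155 ssh "ls -la /mount-9p"
	out/minikube-linux-arm64 -p functional-547155 ssh "sudo umount -f /mount-9p"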

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.62s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-arm64 -p functional-547155 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.62s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.63s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-arm64 -p functional-547155 service list -o json
functional_test.go:1494: Took "630.493945ms" to run "out/minikube-linux-arm64 -p functional-547155 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.63s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.41s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-arm64 -p functional-547155 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:32581
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.41s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.38s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-arm64 -p functional-547155 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.38s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.39s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-arm64 -p functional-547155 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:32581
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.39s)
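
The ServiceCmd subtests hit the same NodePort service through every output mode (32581 is whatever NodePort the cluster allocated). The variants side by side:

	out/minikube-linux-arm64 -p functional-547155 service list
	out/minikube-linux-arm64 -p functional-547155 service list -o json
	out/minikube-linux-arm64 -p functional-547155 service --namespace=default --https --url hello-node
	out/minikube-linux-arm64 -p functional-547155 service hello-node --url --format="{{.IP}}"
	out/minikube-linux-arm64 -p functional-547155 service hello-node --url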

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.72s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-547155 /tmp/TestFunctionalparallelMountCmdspecific-port2701395307/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-547155 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-547155 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (326.091885ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I0127 12:42:31.819529 1186773 retry.go:31] will retry after 377.852624ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-547155 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-547155 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-547155 /tmp/TestFunctionalparallelMountCmdspecific-port2701395307/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-547155 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-547155 ssh "sudo umount -f /mount-9p": exit status 1 (256.063432ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-547155 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-547155 /tmp/TestFunctionalparallelMountCmdspecific-port2701395307/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.72s)
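
The "exit status 32" above is expected: stopping the mount process already unmounted /mount-9p, and umount returns 32 when the target is not mounted, so the forced cleanup finds nothing to do. The --port flag pins the host-side 9p server to a fixed port instead of a random one, e.g.:

	out/minikube-linux-arm64 mount -p functional-547155 /tmp/mount-src:/mount-9p --port 46464 &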

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.21s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-547155 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1027224638/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-547155 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1027224638/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-547155 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1027224638/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-547155 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-547155 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-547155 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-547155 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-547155 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1027224638/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-547155 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1027224638/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-547155 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1027224638/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.21s)

                                                
                                    
TestFunctional/parallel/Version/short (0.1s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-arm64 -p functional-547155 version --short
--- PASS: TestFunctional/parallel/Version/short (0.10s)

                                                
                                    
TestFunctional/parallel/Version/components (1.4s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-arm64 -p functional-547155 version -o=json --components
functional_test.go:2270: (dbg) Done: out/minikube-linux-arm64 -p functional-547155 version -o=json --components: (1.396068852s)
--- PASS: TestFunctional/parallel/Version/components (1.40s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-547155 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-547155 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.32.1
registry.k8s.io/kube-proxy:v1.32.1
registry.k8s.io/kube-controller-manager:v1.32.1
registry.k8s.io/kube-apiserver:v1.32.1
registry.k8s.io/etcd:3.5.16-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-547155
docker.io/kindest/kindnetd:v20241108-5c6d2daf
docker.io/kicbase/echo-server:functional-547155
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-547155 image ls --format short --alsologtostderr:
I0127 12:42:45.353277 1225092 out.go:345] Setting OutFile to fd 1 ...
I0127 12:42:45.353402 1225092 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 12:42:45.353412 1225092 out.go:358] Setting ErrFile to fd 2...
I0127 12:42:45.353418 1225092 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 12:42:45.353767 1225092 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20317-1181389/.minikube/bin
I0127 12:42:45.354703 1225092 config.go:182] Loaded profile config "functional-547155": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 12:42:45.354862 1225092 config.go:182] Loaded profile config "functional-547155": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 12:42:45.355529 1225092 cli_runner.go:164] Run: docker container inspect functional-547155 --format={{.State.Status}}
I0127 12:42:45.382407 1225092 ssh_runner.go:195] Run: systemctl --version
I0127 12:42:45.382469 1225092 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-547155
I0127 12:42:45.402318 1225092 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33947 SSHKeyPath:/home/jenkins/minikube-integration/20317-1181389/.minikube/machines/functional-547155/id_rsa Username:docker}
I0127 12:42:45.495449 1225092 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)
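
"image ls" is exercised once per output format; the next three subtests repeat this listing as a table, as JSON, and as YAML. All four forms:

	out/minikube-linux-arm64 -p functional-547155 image ls --format short
	out/minikube-linux-arm64 -p functional-547155 image ls --format table
	out/minikube-linux-arm64 -p functional-547155 image ls --format json
	out/minikube-linux-arm64 -p functional-547155 image ls --format yaml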

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-547155 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-547155 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| docker.io/kindest/kindnetd                  | v20241108-5c6d2daf | sha256:2be0bc | 35.3MB |
| docker.io/library/nginx                     | alpine             | sha256:f9d642 | 21.6MB |
| docker.io/library/nginx                     | latest             | sha256:781d90 | 68.5MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:1611cd | 1.94MB |
| registry.k8s.io/kube-apiserver              | v1.32.1            | sha256:265c2d | 26.2MB |
| registry.k8s.io/pause                       | latest             | sha256:8cb209 | 71.3kB |
| docker.io/library/minikube-local-cache-test | functional-547155  | sha256:9b2852 | 990B   |
| docker.io/kicbase/echo-server               | functional-547155  | sha256:ce2d2c | 2.17MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:ba04bb | 8.03MB |
| registry.k8s.io/kube-controller-manager     | v1.32.1            | sha256:293376 | 24MB   |
| registry.k8s.io/kube-proxy                  | v1.32.1            | sha256:e124fb | 27.4MB |
| registry.k8s.io/kube-scheduler              | v1.32.1            | sha256:ddb38c | 18.9MB |
| registry.k8s.io/pause                       | 3.1                | sha256:8057e0 | 262kB  |
| registry.k8s.io/pause                       | 3.3                | sha256:3d1873 | 249kB  |
| registry.k8s.io/coredns/coredns             | v1.11.3            | sha256:2f6c96 | 16.9MB |
| registry.k8s.io/echoserver-arm              | 1.8                | sha256:72565b | 45.3MB |
| registry.k8s.io/etcd                        | 3.5.16-0           | sha256:7fc9d4 | 67.9MB |
| registry.k8s.io/pause                       | 3.10               | sha256:afb617 | 268kB  |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-547155 image ls --format table --alsologtostderr:
I0127 12:42:46.514204 1225366 out.go:345] Setting OutFile to fd 1 ...
I0127 12:42:46.514342 1225366 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 12:42:46.514354 1225366 out.go:358] Setting ErrFile to fd 2...
I0127 12:42:46.514359 1225366 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 12:42:46.514613 1225366 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20317-1181389/.minikube/bin
I0127 12:42:46.517067 1225366 config.go:182] Loaded profile config "functional-547155": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 12:42:46.517274 1225366 config.go:182] Loaded profile config "functional-547155": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 12:42:46.518407 1225366 cli_runner.go:164] Run: docker container inspect functional-547155 --format={{.State.Status}}
I0127 12:42:46.539581 1225366 ssh_runner.go:195] Run: systemctl --version
I0127 12:42:46.539637 1225366 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-547155
I0127 12:42:46.559219 1225366 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33947 SSHKeyPath:/home/jenkins/minikube-integration/20317-1181389/.minikube/machines/functional-547155/id_rsa Username:docker}
I0127 12:42:46.653355 1225366 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-547155 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-547155 image ls --format json --alsologtostderr:
[{"id":"sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"18306114"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"74084559"},{"id":"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"},{"id":"sha256:2933761aa7adae93679cdde1c0bf457bd4dc4b53f95fc066a4c50aa9c375ea13","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.32.1"],"size":"23968433"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:9b285280a447022b758167722744c732daeb51bfa96ed7763f617d8f2f2d5478","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-547155"],"size":"990"},{"id":"sha256:781d902f1e046dcb5aba879a2371b2b6494f97bad89f65a2c7308e78f8087670","repoDigests":["docker.io/library/nginx@sha256:0a399eb16751829e1af26fea27b20c3ec28d7ab1fb72182879dcae1cca21206a"],"repoTags":["docker.io/library/nginx:latest"],"size":"68507108"},{"id":"sha256:265c2dedf28ab9b88c7910c1643e210ad62483867f2bab88f56919a6e49a0d19","repoDigests":["registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac"],"repoTags":["registry.k8s.io/kube-apiserver:v1.32.1"],"size":"26217748"},{"id":"sha256:e124fbed851d756107a6153db4dc52269a2fd34af3cc46f00a2ef113f868aab0","repoDigests":["registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5"],"repoTags":["registry.k8s.io/kube-proxy:v1.32.1"],"size":"27363416"},{"id":"sha256:ddb38cac617cb18802e09e448db4b3aa70e9e469b02defa76e6de7192847a71c","repoDigests":["registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e"],"repoTags":["registry.k8s.io/kube-scheduler:v1.32.1"],"size":"18922457"},{"id":"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":["registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"267933"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"},{"id":"sha256:2be0bcf609c6573ee83e676c747f31bda661ab2d4e039c51839e38fd258d2903","repoDigests":["docker.io/kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108"],"repoTags":["docker.io/kindest/kindnetd:v20241108-5c6d2daf"],"size":"35310383"},{"id":"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-547155"],"size":"2173567"},{"id":"sha256:f9d642c42f7bc79efd0a3aa2b7fe913e0324a23c2f27c6b7f3f112473d47131d","repoDigests":["docker.io/library/nginx@sha256:814a8e88df978ade80e584cc5b333144b9372a8e3c98872d07137dbf3b44d0e4"],"repoTags":["docker.io/library/nginx:alpine"],"size":"21565101"},{"id":"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"16948420"},{"id":"sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"45324675"},{"id":"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82","repoDigests":["registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5"],"repoTags":["registry.k8s.io/etcd:3.5.16-0"],"size":"67941650"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-547155 image ls --format json --alsologtostderr:
I0127 12:42:46.258533 1225281 out.go:345] Setting OutFile to fd 1 ...
I0127 12:42:46.258736 1225281 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 12:42:46.258763 1225281 out.go:358] Setting ErrFile to fd 2...
I0127 12:42:46.258781 1225281 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 12:42:46.259082 1225281 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20317-1181389/.minikube/bin
I0127 12:42:46.259766 1225281 config.go:182] Loaded profile config "functional-547155": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 12:42:46.259973 1225281 config.go:182] Loaded profile config "functional-547155": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 12:42:46.260525 1225281 cli_runner.go:164] Run: docker container inspect functional-547155 --format={{.State.Status}}
I0127 12:42:46.282984 1225281 ssh_runner.go:195] Run: systemctl --version
I0127 12:42:46.283040 1225281 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-547155
I0127 12:42:46.303877 1225281 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33947 SSHKeyPath:/home/jenkins/minikube-integration/20317-1181389/.minikube/machines/functional-547155/id_rsa Username:docker}
I0127 12:42:46.394036 1225281 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-547155 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-547155 image ls --format yaml --alsologtostderr:
- id: sha256:9b285280a447022b758167722744c732daeb51bfa96ed7763f617d8f2f2d5478
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-547155
size: "990"
- id: sha256:781d902f1e046dcb5aba879a2371b2b6494f97bad89f65a2c7308e78f8087670
repoDigests:
- docker.io/library/nginx@sha256:0a399eb16751829e1af26fea27b20c3ec28d7ab1fb72182879dcae1cca21206a
repoTags:
- docker.io/library/nginx:latest
size: "68507108"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:ddb38cac617cb18802e09e448db4b3aa70e9e469b02defa76e6de7192847a71c
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e
repoTags:
- registry.k8s.io/kube-scheduler:v1.32.1
size: "18922457"
- id: sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "74084559"
- id: sha256:e124fbed851d756107a6153db4dc52269a2fd34af3cc46f00a2ef113f868aab0
repoDigests:
- registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5
repoTags:
- registry.k8s.io/kube-proxy:v1.32.1
size: "27363416"
- id: sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests:
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "267933"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:2be0bcf609c6573ee83e676c747f31bda661ab2d4e039c51839e38fd258d2903
repoDigests:
- docker.io/kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108
repoTags:
- docker.io/kindest/kindnetd:v20241108-5c6d2daf
size: "35310383"
- id: sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "16948420"
- id: sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "45324675"
- id: sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82
repoDigests:
- registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5
repoTags:
- registry.k8s.io/etcd:3.5.16-0
size: "67941650"
- id: sha256:265c2dedf28ab9b88c7910c1643e210ad62483867f2bab88f56919a6e49a0d19
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac
repoTags:
- registry.k8s.io/kube-apiserver:v1.32.1
size: "26217748"
- id: sha256:2933761aa7adae93679cdde1c0bf457bd4dc4b53f95fc066a4c50aa9c375ea13
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954
repoTags:
- registry.k8s.io/kube-controller-manager:v1.32.1
size: "23968433"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "18306114"
- id: sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-547155
size: "2173567"
- id: sha256:f9d642c42f7bc79efd0a3aa2b7fe913e0324a23c2f27c6b7f3f112473d47131d
repoDigests:
- docker.io/library/nginx@sha256:814a8e88df978ade80e584cc5b333144b9372a8e3c98872d07137dbf3b44d0e4
repoTags:
- docker.io/library/nginx:alpine
size: "21565101"
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-547155 image ls --format yaml --alsologtostderr:
I0127 12:42:45.606097 1225131 out.go:345] Setting OutFile to fd 1 ...
I0127 12:42:45.606305 1225131 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 12:42:45.606662 1225131 out.go:358] Setting ErrFile to fd 2...
I0127 12:42:45.606891 1225131 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 12:42:45.607191 1225131 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20317-1181389/.minikube/bin
I0127 12:42:45.608043 1225131 config.go:182] Loaded profile config "functional-547155": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 12:42:45.608219 1225131 config.go:182] Loaded profile config "functional-547155": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 12:42:45.608773 1225131 cli_runner.go:164] Run: docker container inspect functional-547155 --format={{.State.Status}}
I0127 12:42:45.632612 1225131 ssh_runner.go:195] Run: systemctl --version
I0127 12:42:45.632664 1225131 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-547155
I0127 12:42:45.650448 1225131 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33947 SSHKeyPath:/home/jenkins/minikube-integration/20317-1181389/.minikube/machines/functional-547155/id_rsa Username:docker}
I0127 12:42:45.737705 1225131 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)
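
Note: the image ls output above is rendered from the crictl call on the last Run line (sudo crictl images --output json). The following is a minimal Go sketch, not minikube's own code, of decoding that payload; the field names are assumed from the YAML listing rather than taken from the CRI definitions.

package main

import (
	"encoding/json"
	"fmt"
)

// criImage mirrors the fields that the YAML listing above prints per image.
type criImage struct {
	ID          string   `json:"id"`
	RepoTags    []string `json:"repoTags"`
	RepoDigests []string `json:"repoDigests"`
	Size        string   `json:"size"` // serialized as a string, matching the quoted sizes above
}

type criImageList struct {
	Images []criImage `json:"images"`
}

func main() {
	// Sample payload in the shape crictl emits; values shortened for illustration.
	raw := `{"images":[{"id":"sha256:7fc9d4aa817a","repoTags":["registry.k8s.io/etcd:3.5.16-0"],"repoDigests":[],"size":"67941650"}]}`
	var list criImageList
	if err := json.Unmarshal([]byte(raw), &list); err != nil {
		panic(err)
	}
	for _, img := range list.Images {
		fmt.Printf("%s  %v  %s bytes\n", img.ID, img.RepoTags, img.Size)
	}
}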

TestFunctional/parallel/ImageCommands/ImageBuild (4.12s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p functional-547155 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-547155 ssh pgrep buildkitd: exit status 1 (301.702665ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-arm64 -p functional-547155 image build -t localhost/my-image:functional-547155 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-arm64 -p functional-547155 image build -t localhost/my-image:functional-547155 testdata/build --alsologtostderr: (3.58514143s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-arm64 -p functional-547155 image build -t localhost/my-image:functional-547155 testdata/build --alsologtostderr:
I0127 12:42:46.154320 1225257 out.go:345] Setting OutFile to fd 1 ...
I0127 12:42:46.159223 1225257 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 12:42:46.159272 1225257 out.go:358] Setting ErrFile to fd 2...
I0127 12:42:46.159293 1225257 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 12:42:46.159611 1225257 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20317-1181389/.minikube/bin
I0127 12:42:46.160356 1225257 config.go:182] Loaded profile config "functional-547155": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 12:42:46.163319 1225257 config.go:182] Loaded profile config "functional-547155": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 12:42:46.163890 1225257 cli_runner.go:164] Run: docker container inspect functional-547155 --format={{.State.Status}}
I0127 12:42:46.197489 1225257 ssh_runner.go:195] Run: systemctl --version
I0127 12:42:46.197551 1225257 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-547155
I0127 12:42:46.242270 1225257 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33947 SSHKeyPath:/home/jenkins/minikube-integration/20317-1181389/.minikube/machines/functional-547155/id_rsa Username:docker}
I0127 12:42:46.338860 1225257 build_images.go:161] Building image from path: /tmp/build.4018323714.tar
I0127 12:42:46.338932 1225257 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0127 12:42:46.347969 1225257 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.4018323714.tar
I0127 12:42:46.351251 1225257 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.4018323714.tar: stat -c "%s %y" /var/lib/minikube/build/build.4018323714.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.4018323714.tar': No such file or directory
I0127 12:42:46.351275 1225257 ssh_runner.go:362] scp /tmp/build.4018323714.tar --> /var/lib/minikube/build/build.4018323714.tar (3072 bytes)
I0127 12:42:46.376770 1225257 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.4018323714
I0127 12:42:46.385780 1225257 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.4018323714 -xf /var/lib/minikube/build/build.4018323714.tar
I0127 12:42:46.396561 1225257 containerd.go:394] Building image: /var/lib/minikube/build/build.4018323714
I0127 12:42:46.396633 1225257 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.4018323714 --local dockerfile=/var/lib/minikube/build/build.4018323714 --output type=image,name=localhost/my-image:functional-547155
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.7s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 DONE 0.1s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.5s

#6 [2/3] RUN true
#6 DONE 0.5s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:83b880e7117f717cf83e5d4bff7eb6474e0878c8425168a1dc60bd46a481ff1d
#8 exporting manifest sha256:83b880e7117f717cf83e5d4bff7eb6474e0878c8425168a1dc60bd46a481ff1d 0.0s done
#8 exporting config sha256:9360b40034d12bb06902117c3fb733f75132383b70dad0f9ccff562b7d26c25d 0.0s done
#8 naming to localhost/my-image:functional-547155 done
#8 DONE 0.2s
I0127 12:42:49.643532 1225257 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.4018323714 --local dockerfile=/var/lib/minikube/build/build.4018323714 --output type=image,name=localhost/my-image:functional-547155: (3.246871857s)
I0127 12:42:49.643622 1225257 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.4018323714
I0127 12:42:49.653802 1225257 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.4018323714.tar
I0127 12:42:49.663208 1225257 build_images.go:217] Built localhost/my-image:functional-547155 from /tmp/build.4018323714.tar
I0127 12:42:49.663242 1225257 build_images.go:133] succeeded building to: functional-547155
I0127 12:42:49.663248 1225257 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-547155 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.12s)
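
Note: the transcript above stages a context tar under /var/lib/minikube/build, unpacks it, and drives BuildKit directly. A minimal sketch of the same buildctl invocation run on a host rather than through the test's ssh_runner; it assumes buildctl and a reachable buildkitd and reuses the flags from the log.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Staged context directory, as unpacked from the tar in the log above.
	dir := "/var/lib/minikube/build/build.4018323714"
	cmd := exec.Command("buildctl", "build",
		"--frontend", "dockerfile.v0",
		"--local", "context="+dir,
		"--local", "dockerfile="+dir,
		"--output", "type=image,name=localhost/my-image:functional-547155",
	)
	out, err := cmd.CombinedOutput()
	fmt.Println(string(out))
	if err != nil {
		panic(err)
	}
}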

TestFunctional/parallel/ImageCommands/Setup (1.29s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.268799718s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-547155
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.29s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p functional-547155 image load --daemon kicbase/echo-server:functional-547155 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-arm64 -p functional-547155 image load --daemon kicbase/echo-server:functional-547155 --alsologtostderr: (1.011629533s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-547155 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.28s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.42s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p functional-547155 image load --daemon kicbase/echo-server:functional-547155 --alsologtostderr
functional_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p functional-547155 image load --daemon kicbase/echo-server:functional-547155 --alsologtostderr: (1.122208275s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-547155 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.42s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.58s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-547155
functional_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p functional-547155 image load --daemon kicbase/echo-server:functional-547155 --alsologtostderr
functional_test.go:245: (dbg) Done: out/minikube-linux-arm64 -p functional-547155 image load --daemon kicbase/echo-server:functional-547155 --alsologtostderr: (1.018073348s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-547155 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.58s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.42s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-547155 image save kicbase/echo-server:functional-547155 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.42s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.57s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-arm64 -p functional-547155 image rm kicbase/echo-server:functional-547155 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-547155 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.57s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.88s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-arm64 -p functional-547155 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-547155 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.88s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.46s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-547155
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-547155 image save --daemon kicbase/echo-server:functional-547155 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-547155
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.46s)
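
Note: ImageSaveToFile, ImageRemove, ImageLoadFromFile, and ImageSaveDaemon above together amount to a save/remove/load round-trip. A minimal sketch of that loop driven through the minikube CLI; the profile name is from this run, while the tar path is a hypothetical local one (the run itself used the Jenkins workspace).

package main

import (
	"os"
	"os/exec"
)

// run executes a command, streaming its output to the console.
func run(args ...string) error {
	cmd := exec.Command(args[0], args[1:]...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	img := "kicbase/echo-server:functional-547155"
	tar := "/tmp/echo-server-save.tar" // hypothetical path for this sketch
	for _, args := range [][]string{
		{"minikube", "-p", "functional-547155", "image", "save", img, tar},
		{"minikube", "-p", "functional-547155", "image", "rm", img},
		{"minikube", "-p", "functional-547155", "image", "load", tar},
		{"minikube", "-p", "functional-547155", "image", "ls"},
	} {
		if err := run(args...); err != nil {
			panic(err)
		}
	}
}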

TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-547155 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.17s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-547155 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.17s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.17s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-547155 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.17s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-547155
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-547155
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-547155
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (108.09s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-950481 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E0127 12:43:17.493172 1186773 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/addons-453723/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-950481 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m47.203908123s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-950481 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (108.09s)

TestMultiControlPlane/serial/DeployApp (32.88s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-950481 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-950481 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-950481 -- rollout status deployment/busybox: (29.711994622s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-950481 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-950481 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-950481 -- exec busybox-58667487b6-8rqt9 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-950481 -- exec busybox-58667487b6-h7j8n -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-950481 -- exec busybox-58667487b6-mcqss -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-950481 -- exec busybox-58667487b6-8rqt9 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-950481 -- exec busybox-58667487b6-h7j8n -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-950481 -- exec busybox-58667487b6-mcqss -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-950481 -- exec busybox-58667487b6-8rqt9 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-950481 -- exec busybox-58667487b6-h7j8n -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-950481 -- exec busybox-58667487b6-mcqss -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (32.88s)

TestMultiControlPlane/serial/PingHostFromPods (1.66s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-950481 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-950481 -- exec busybox-58667487b6-8rqt9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-950481 -- exec busybox-58667487b6-8rqt9 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-950481 -- exec busybox-58667487b6-h7j8n -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-950481 -- exec busybox-58667487b6-h7j8n -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-950481 -- exec busybox-58667487b6-mcqss -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-950481 -- exec busybox-58667487b6-mcqss -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.66s)
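
Note: the shell pipeline above (nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3) takes the fifth line of BusyBox nslookup output, where the resolved address appears, and extracts its third space-separated field; the test then pings that host IP (192.168.49.1 here) from each pod. A minimal sketch of the same check via kubectl exec, with pod names from this run.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	pods := []string{"busybox-58667487b6-8rqt9", "busybox-58667487b6-h7j8n", "busybox-58667487b6-mcqss"}
	resolve := `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3`
	for _, p := range pods {
		// Resolve the host gateway name inside the pod.
		ip, err := exec.Command("kubectl", "--context", "ha-950481",
			"exec", p, "--", "sh", "-c", resolve).Output()
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s -> %s", p, ip)
		// One ICMP probe to the gateway address used by this run.
		if err := exec.Command("kubectl", "--context", "ha-950481",
			"exec", p, "--", "sh", "-c", "ping -c 1 192.168.49.1").Run(); err != nil {
			panic(err)
		}
	}
}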

TestMultiControlPlane/serial/AddWorkerNode (22.01s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-950481 -v=7 --alsologtostderr
E0127 12:45:33.634834 1186773 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/addons-453723/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-950481 -v=7 --alsologtostderr: (21.031937095s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-950481 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (22.01s)

TestMultiControlPlane/serial/NodeLabels (0.14s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-950481 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.14s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (1.02s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.019010803s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.02s)

TestMultiControlPlane/serial/CopyFile (19.46s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-950481 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-950481 cp testdata/cp-test.txt ha-950481:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-950481 ssh -n ha-950481 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-950481 cp ha-950481:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3982722998/001/cp-test_ha-950481.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-950481 ssh -n ha-950481 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-950481 cp ha-950481:/home/docker/cp-test.txt ha-950481-m02:/home/docker/cp-test_ha-950481_ha-950481-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-950481 ssh -n ha-950481 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-950481 ssh -n ha-950481-m02 "sudo cat /home/docker/cp-test_ha-950481_ha-950481-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-950481 cp ha-950481:/home/docker/cp-test.txt ha-950481-m03:/home/docker/cp-test_ha-950481_ha-950481-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-950481 ssh -n ha-950481 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-950481 ssh -n ha-950481-m03 "sudo cat /home/docker/cp-test_ha-950481_ha-950481-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-950481 cp ha-950481:/home/docker/cp-test.txt ha-950481-m04:/home/docker/cp-test_ha-950481_ha-950481-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-950481 ssh -n ha-950481 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-950481 ssh -n ha-950481-m04 "sudo cat /home/docker/cp-test_ha-950481_ha-950481-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-950481 cp testdata/cp-test.txt ha-950481-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-950481 ssh -n ha-950481-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-950481 cp ha-950481-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3982722998/001/cp-test_ha-950481-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-950481 ssh -n ha-950481-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-950481 cp ha-950481-m02:/home/docker/cp-test.txt ha-950481:/home/docker/cp-test_ha-950481-m02_ha-950481.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-950481 ssh -n ha-950481-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-950481 ssh -n ha-950481 "sudo cat /home/docker/cp-test_ha-950481-m02_ha-950481.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-950481 cp ha-950481-m02:/home/docker/cp-test.txt ha-950481-m03:/home/docker/cp-test_ha-950481-m02_ha-950481-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-950481 ssh -n ha-950481-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-950481 ssh -n ha-950481-m03 "sudo cat /home/docker/cp-test_ha-950481-m02_ha-950481-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-950481 cp ha-950481-m02:/home/docker/cp-test.txt ha-950481-m04:/home/docker/cp-test_ha-950481-m02_ha-950481-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-950481 ssh -n ha-950481-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-950481 ssh -n ha-950481-m04 "sudo cat /home/docker/cp-test_ha-950481-m02_ha-950481-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-950481 cp testdata/cp-test.txt ha-950481-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-950481 ssh -n ha-950481-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-950481 cp ha-950481-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3982722998/001/cp-test_ha-950481-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-950481 ssh -n ha-950481-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-950481 cp ha-950481-m03:/home/docker/cp-test.txt ha-950481:/home/docker/cp-test_ha-950481-m03_ha-950481.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-950481 ssh -n ha-950481-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-950481 ssh -n ha-950481 "sudo cat /home/docker/cp-test_ha-950481-m03_ha-950481.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-950481 cp ha-950481-m03:/home/docker/cp-test.txt ha-950481-m02:/home/docker/cp-test_ha-950481-m03_ha-950481-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-950481 ssh -n ha-950481-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-950481 ssh -n ha-950481-m02 "sudo cat /home/docker/cp-test_ha-950481-m03_ha-950481-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-950481 cp ha-950481-m03:/home/docker/cp-test.txt ha-950481-m04:/home/docker/cp-test_ha-950481-m03_ha-950481-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-950481 ssh -n ha-950481-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-950481 ssh -n ha-950481-m04 "sudo cat /home/docker/cp-test_ha-950481-m03_ha-950481-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-950481 cp testdata/cp-test.txt ha-950481-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-950481 ssh -n ha-950481-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-950481 cp ha-950481-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3982722998/001/cp-test_ha-950481-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-950481 ssh -n ha-950481-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-950481 cp ha-950481-m04:/home/docker/cp-test.txt ha-950481:/home/docker/cp-test_ha-950481-m04_ha-950481.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-950481 ssh -n ha-950481-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-950481 ssh -n ha-950481 "sudo cat /home/docker/cp-test_ha-950481-m04_ha-950481.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-950481 cp ha-950481-m04:/home/docker/cp-test.txt ha-950481-m02:/home/docker/cp-test_ha-950481-m04_ha-950481-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-950481 ssh -n ha-950481-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-950481 ssh -n ha-950481-m02 "sudo cat /home/docker/cp-test_ha-950481-m04_ha-950481-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-950481 cp ha-950481-m04:/home/docker/cp-test.txt ha-950481-m03:/home/docker/cp-test_ha-950481-m04_ha-950481-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-950481 ssh -n ha-950481-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-950481 ssh -n ha-950481-m03 "sudo cat /home/docker/cp-test_ha-950481-m04_ha-950481-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.46s)
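
Note: CopyFile above pushes testdata/cp-test.txt to each node with minikube cp and reads it back over minikube ssh for every node pair. A minimal sketch of the push-and-verify half for a single file, assuming the same profile and node names.

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	want, err := os.ReadFile("testdata/cp-test.txt")
	if err != nil {
		panic(err)
	}
	for _, node := range []string{"ha-950481", "ha-950481-m02", "ha-950481-m03", "ha-950481-m04"} {
		// Push the file to the node's filesystem.
		if err := exec.Command("minikube", "-p", "ha-950481", "cp",
			"testdata/cp-test.txt", node+":/home/docker/cp-test.txt").Run(); err != nil {
			panic(err)
		}
		// Read it back over ssh and compare.
		got, err := exec.Command("minikube", "-p", "ha-950481", "ssh", "-n", node,
			"sudo cat /home/docker/cp-test.txt").Output()
		if err != nil {
			panic(err)
		}
		if !bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)) {
			panic(fmt.Sprintf("content mismatch on %s", node))
		}
	}
}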

TestMultiControlPlane/serial/StopSecondaryNode (12.86s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-950481 node stop m02 -v=7 --alsologtostderr
E0127 12:46:01.335873 1186773 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/addons-453723/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-950481 node stop m02 -v=7 --alsologtostderr: (12.071246937s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-950481 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-950481 status -v=7 --alsologtostderr: exit status 7 (784.114631ms)

-- stdout --
	ha-950481
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-950481-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-950481-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-950481-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0127 12:46:10.196108 1241908 out.go:345] Setting OutFile to fd 1 ...
	I0127 12:46:10.196303 1241908 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:46:10.196333 1241908 out.go:358] Setting ErrFile to fd 2...
	I0127 12:46:10.196356 1241908 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:46:10.196614 1241908 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20317-1181389/.minikube/bin
	I0127 12:46:10.196827 1241908 out.go:352] Setting JSON to false
	I0127 12:46:10.196892 1241908 mustload.go:65] Loading cluster: ha-950481
	I0127 12:46:10.196970 1241908 notify.go:220] Checking for updates...
	I0127 12:46:10.197431 1241908 config.go:182] Loaded profile config "ha-950481": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 12:46:10.197485 1241908 status.go:174] checking status of ha-950481 ...
	I0127 12:46:10.198443 1241908 cli_runner.go:164] Run: docker container inspect ha-950481 --format={{.State.Status}}
	I0127 12:46:10.221711 1241908 status.go:371] ha-950481 host status = "Running" (err=<nil>)
	I0127 12:46:10.221741 1241908 host.go:66] Checking if "ha-950481" exists ...
	I0127 12:46:10.222131 1241908 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-950481
	I0127 12:46:10.249236 1241908 host.go:66] Checking if "ha-950481" exists ...
	I0127 12:46:10.249594 1241908 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0127 12:46:10.249656 1241908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-950481
	I0127 12:46:10.291670 1241908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33952 SSHKeyPath:/home/jenkins/minikube-integration/20317-1181389/.minikube/machines/ha-950481/id_rsa Username:docker}
	I0127 12:46:10.386755 1241908 ssh_runner.go:195] Run: systemctl --version
	I0127 12:46:10.391821 1241908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 12:46:10.407658 1241908 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0127 12:46:10.474817 1241908 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:73 SystemTime:2025-01-27 12:46:10.464959269 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0127 12:46:10.475432 1241908 kubeconfig.go:125] found "ha-950481" server: "https://192.168.49.254:8443"
	I0127 12:46:10.475472 1241908 api_server.go:166] Checking apiserver status ...
	I0127 12:46:10.475527 1241908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:46:10.487236 1241908 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1508/cgroup
	I0127 12:46:10.497183 1241908 api_server.go:182] apiserver freezer: "3:freezer:/docker/34f76d3513afe752ed0cf18d23c2272154eb51ad26475d40d97deb901053e1ae/kubepods/burstable/pod9d4bc15f6a7493a7db6e0cfd3aba2850/b7a930c9a9594a68a4386e4a4845ccd0913f47f341291086924e9e1a22f1c8c0"
	I0127 12:46:10.497264 1241908 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/34f76d3513afe752ed0cf18d23c2272154eb51ad26475d40d97deb901053e1ae/kubepods/burstable/pod9d4bc15f6a7493a7db6e0cfd3aba2850/b7a930c9a9594a68a4386e4a4845ccd0913f47f341291086924e9e1a22f1c8c0/freezer.state
	I0127 12:46:10.506717 1241908 api_server.go:204] freezer state: "THAWED"
	I0127 12:46:10.506760 1241908 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0127 12:46:10.515781 1241908 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0127 12:46:10.515822 1241908 status.go:463] ha-950481 apiserver status = Running (err=<nil>)
	I0127 12:46:10.515839 1241908 status.go:176] ha-950481 status: &{Name:ha-950481 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 12:46:10.515863 1241908 status.go:174] checking status of ha-950481-m02 ...
	I0127 12:46:10.516230 1241908 cli_runner.go:164] Run: docker container inspect ha-950481-m02 --format={{.State.Status}}
	I0127 12:46:10.535225 1241908 status.go:371] ha-950481-m02 host status = "Stopped" (err=<nil>)
	I0127 12:46:10.535253 1241908 status.go:384] host is not running, skipping remaining checks
	I0127 12:46:10.535260 1241908 status.go:176] ha-950481-m02 status: &{Name:ha-950481-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 12:46:10.535280 1241908 status.go:174] checking status of ha-950481-m03 ...
	I0127 12:46:10.535595 1241908 cli_runner.go:164] Run: docker container inspect ha-950481-m03 --format={{.State.Status}}
	I0127 12:46:10.553996 1241908 status.go:371] ha-950481-m03 host status = "Running" (err=<nil>)
	I0127 12:46:10.554025 1241908 host.go:66] Checking if "ha-950481-m03" exists ...
	I0127 12:46:10.554342 1241908 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-950481-m03
	I0127 12:46:10.573742 1241908 host.go:66] Checking if "ha-950481-m03" exists ...
	I0127 12:46:10.574081 1241908 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0127 12:46:10.574135 1241908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-950481-m03
	I0127 12:46:10.598029 1241908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33962 SSHKeyPath:/home/jenkins/minikube-integration/20317-1181389/.minikube/machines/ha-950481-m03/id_rsa Username:docker}
	I0127 12:46:10.686790 1241908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 12:46:10.699435 1241908 kubeconfig.go:125] found "ha-950481" server: "https://192.168.49.254:8443"
	I0127 12:46:10.699465 1241908 api_server.go:166] Checking apiserver status ...
	I0127 12:46:10.699512 1241908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:46:10.710770 1241908 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1431/cgroup
	I0127 12:46:10.721218 1241908 api_server.go:182] apiserver freezer: "3:freezer:/docker/9a05443e95b9b9ec0c1b40ba5404563314d4432fee0122d5115f880f7aa2db0d/kubepods/burstable/pod22e0866ac8437ae6129eb264fb34e035/c2f4b9371f4926e8e37c848c6847b9ccb2455035bfe9abf2ebf73b9e8fc0aaa4"
	I0127 12:46:10.721297 1241908 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/9a05443e95b9b9ec0c1b40ba5404563314d4432fee0122d5115f880f7aa2db0d/kubepods/burstable/pod22e0866ac8437ae6129eb264fb34e035/c2f4b9371f4926e8e37c848c6847b9ccb2455035bfe9abf2ebf73b9e8fc0aaa4/freezer.state
	I0127 12:46:10.730329 1241908 api_server.go:204] freezer state: "THAWED"
	I0127 12:46:10.730360 1241908 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0127 12:46:10.738649 1241908 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0127 12:46:10.738680 1241908 status.go:463] ha-950481-m03 apiserver status = Running (err=<nil>)
	I0127 12:46:10.738689 1241908 status.go:176] ha-950481-m03 status: &{Name:ha-950481-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 12:46:10.738706 1241908 status.go:174] checking status of ha-950481-m04 ...
	I0127 12:46:10.739041 1241908 cli_runner.go:164] Run: docker container inspect ha-950481-m04 --format={{.State.Status}}
	I0127 12:46:10.756917 1241908 status.go:371] ha-950481-m04 host status = "Running" (err=<nil>)
	I0127 12:46:10.756966 1241908 host.go:66] Checking if "ha-950481-m04" exists ...
	I0127 12:46:10.757361 1241908 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-950481-m04
	I0127 12:46:10.775175 1241908 host.go:66] Checking if "ha-950481-m04" exists ...
	I0127 12:46:10.775487 1241908 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0127 12:46:10.775532 1241908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-950481-m04
	I0127 12:46:10.797529 1241908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33967 SSHKeyPath:/home/jenkins/minikube-integration/20317-1181389/.minikube/machines/ha-950481-m04/id_rsa Username:docker}
	I0127 12:46:10.890692 1241908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 12:46:10.903420 1241908 status.go:176] ha-950481-m04 status: &{Name:ha-950481-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.86s)
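
Note: the stderr above shows how status decides "apiserver: Running": pgrep the kube-apiserver process, resolve its freezer cgroup from /proc/<pid>/cgroup, confirm freezer.state is THAWED, then probe /healthz on the HA endpoint. A minimal local sketch of that sequence; it assumes a cgroup v1 freezer hierarchy (as on this run's Ubuntu 20.04 host) and skips TLS verification for brevity, which production code should not.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// 1. Newest process whose full command line matches the apiserver.
	pid, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		panic(err)
	}
	// 2. Its freezer cgroup, from the "N:freezer:/path" line of /proc/<pid>/cgroup.
	cg, err := os.ReadFile("/proc/" + strings.TrimSpace(string(pid)) + "/cgroup")
	if err != nil {
		panic(err)
	}
	var freezer string
	for _, line := range strings.Split(string(cg), "\n") {
		if parts := strings.SplitN(line, ":", 3); len(parts) == 3 && parts[1] == "freezer" {
			freezer = parts[2]
		}
	}
	// 3. THAWED means the container is not frozen.
	state, err := os.ReadFile("/sys/fs/cgroup/freezer" + freezer + "/freezer.state")
	if err != nil {
		panic(err)
	}
	fmt.Println("freezer state:", strings.TrimSpace(string(state)))
	// 4. Probe the HA endpoint's healthz (cert checking skipped in this sketch).
	client := &http.Client{Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}}
	resp, err := client.Get("https://192.168.49.254:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("healthz:", resp.Status)
}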

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.78s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.78s)

TestMultiControlPlane/serial/RestartSecondaryNode (19.32s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-950481 node start m02 -v=7 --alsologtostderr
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-950481 node start m02 -v=7 --alsologtostderr: (18.148874328s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-950481 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-950481 status -v=7 --alsologtostderr: (1.060576914s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (19.32s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.00s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (132.04s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-950481 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-950481 -v=7 --alsologtostderr
E0127 12:46:55.562272 1186773 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/functional-547155/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:46:55.568624 1186773 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/functional-547155/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:46:55.579974 1186773 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/functional-547155/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:46:55.601298 1186773 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/functional-547155/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:46:55.642534 1186773 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/functional-547155/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:46:55.723806 1186773 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/functional-547155/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:46:55.885265 1186773 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/functional-547155/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:46:56.206570 1186773 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/functional-547155/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:46:56.848564 1186773 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/functional-547155/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:46:58.129896 1186773 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/functional-547155/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:47:00.691359 1186773 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/functional-547155/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:47:05.813119 1186773 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/functional-547155/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 stop -p ha-950481 -v=7 --alsologtostderr: (37.1446879s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 start -p ha-950481 --wait=true -v=7 --alsologtostderr
E0127 12:47:16.054815 1186773 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/functional-547155/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:47:36.536766 1186773 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/functional-547155/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:48:17.499027 1186773 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/functional-547155/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 start -p ha-950481 --wait=true -v=7 --alsologtostderr: (1m34.726273704s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-950481
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (132.04s)

TestMultiControlPlane/serial/DeleteSecondaryNode (10.49s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-950481 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-950481 node delete m03 -v=7 --alsologtostderr: (9.594912215s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-950481 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.49s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.77s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.77s)

TestMultiControlPlane/serial/StopCluster (35.91s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-950481 stop -v=7 --alsologtostderr
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-950481 stop -v=7 --alsologtostderr: (35.783567365s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-950481 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-950481 status -v=7 --alsologtostderr: exit status 7 (125.370042ms)

-- stdout --
	ha-950481
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-950481-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-950481-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0127 12:49:31.152781 1256318 out.go:345] Setting OutFile to fd 1 ...
	I0127 12:49:31.152929 1256318 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:49:31.152943 1256318 out.go:358] Setting ErrFile to fd 2...
	I0127 12:49:31.152948 1256318 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:49:31.153281 1256318 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20317-1181389/.minikube/bin
	I0127 12:49:31.153498 1256318 out.go:352] Setting JSON to false
	I0127 12:49:31.153542 1256318 mustload.go:65] Loading cluster: ha-950481
	I0127 12:49:31.153619 1256318 notify.go:220] Checking for updates...
	I0127 12:49:31.155167 1256318 config.go:182] Loaded profile config "ha-950481": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 12:49:31.155207 1256318 status.go:174] checking status of ha-950481 ...
	I0127 12:49:31.156013 1256318 cli_runner.go:164] Run: docker container inspect ha-950481 --format={{.State.Status}}
	I0127 12:49:31.175246 1256318 status.go:371] ha-950481 host status = "Stopped" (err=<nil>)
	I0127 12:49:31.175273 1256318 status.go:384] host is not running, skipping remaining checks
	I0127 12:49:31.175280 1256318 status.go:176] ha-950481 status: &{Name:ha-950481 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 12:49:31.175306 1256318 status.go:174] checking status of ha-950481-m02 ...
	I0127 12:49:31.175639 1256318 cli_runner.go:164] Run: docker container inspect ha-950481-m02 --format={{.State.Status}}
	I0127 12:49:31.199314 1256318 status.go:371] ha-950481-m02 host status = "Stopped" (err=<nil>)
	I0127 12:49:31.199340 1256318 status.go:384] host is not running, skipping remaining checks
	I0127 12:49:31.199347 1256318 status.go:176] ha-950481-m02 status: &{Name:ha-950481-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 12:49:31.199370 1256318 status.go:174] checking status of ha-950481-m04 ...
	I0127 12:49:31.199664 1256318 cli_runner.go:164] Run: docker container inspect ha-950481-m04 --format={{.State.Status}}
	I0127 12:49:31.219374 1256318 status.go:371] ha-950481-m04 host status = "Stopped" (err=<nil>)
	I0127 12:49:31.219396 1256318 status.go:384] host is not running, skipping remaining checks
	I0127 12:49:31.219403 1256318 status.go:176] ha-950481-m04 status: &{Name:ha-950481-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.91s)
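
Worth noting from the block above: against a fully stopped cluster, "minikube status" still prints the per-node report but exits non-zero (exit status 7 in this run), so callers must read the exit code deliberately rather than treating any non-zero exit as a hard failure. A minimal Go sketch of capturing that code with os/exec follows; the binary path and profile name are copied from the run above, and this is illustrative, not code from the suite.

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Binary path and profile name are taken from the log above.
	cmd := exec.Command("out/minikube-linux-arm64", "-p", "ha-950481", "status")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out)) // the per-node Stopped/Running report
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// A fully stopped cluster yields a non-zero code (7 in this run)
		// even though the status report itself was produced successfully.
		fmt.Println("status exit code:", exitErr.ExitCode())
	}
}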

TestMultiControlPlane/serial/RestartCluster (71.31s)
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 start -p ha-950481 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E0127 12:49:39.420485 1186773 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/functional-547155/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:50:33.634273 1186773 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/addons-453723/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 start -p ha-950481 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m10.26596825s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-950481 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (71.31s)
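
The final kubectl command above uses a go-template that prints, for every node, the status of its Ready condition, one value per line; the restart is considered settled when each value reads True. A small Go sketch of the same check (illustrative; the literal single quotes wrapped around the template on the command line above are dropped here):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same template as in the test command above, without the outer quotes.
	tmpl := `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`
	out, err := exec.Command("kubectl", "get", "nodes", "-o", "go-template="+tmpl).Output()
	if err != nil {
		panic(err)
	}
	// One value per node; each should be "True" once the cluster is healthy.
	for _, status := range strings.Fields(string(out)) {
		fmt.Println("node Ready:", status)
	}
}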

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.79s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.79s)

TestMultiControlPlane/serial/AddSecondaryNode (44.31s)
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-950481 --control-plane -v=7 --alsologtostderr
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 node add -p ha-950481 --control-plane -v=7 --alsologtostderr: (43.290229547s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-950481 status -v=7 --alsologtostderr
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-950481 status -v=7 --alsologtostderr: (1.016026361s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (44.31s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.34s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.341854423s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.34s)

TestJSONOutput/start/Command (78.02s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-533600 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd
E0127 12:51:55.562207 1186773 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/functional-547155/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:52:23.267083 1186773 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/functional-547155/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-533600 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd: (1m18.011321232s)
--- PASS: TestJSONOutput/start/Command (78.02s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.74s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-533600 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.74s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.66s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-533600 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.66s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.73s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-533600 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-533600 --output=json --user=testUser: (5.734020424s)
--- PASS: TestJSONOutput/stop/Command (5.73s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.24s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-150729 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-150729 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (89.062315ms)

-- stdout --
	{"specversion":"1.0","id":"32382354-2f0d-417c-9357-b6c10ea0032a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-150729] minikube v1.35.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"e37077e8-8384-4cc1-a462-22b3c0941bd0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20317"}}
	{"specversion":"1.0","id":"ace33776-a3b7-45de-bca8-bf98f664e46a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"596c7b2a-a9a6-4b02-a32b-f362a49cb33d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20317-1181389/kubeconfig"}}
	{"specversion":"1.0","id":"8be6c453-64cc-48fd-8db0-17a7cf82c68b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20317-1181389/.minikube"}}
	{"specversion":"1.0","id":"786640f7-a094-45df-8148-804f4ca7b0da","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"bff774e4-7a86-4310-a79a-f9719c601778","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"d2e8115b-7577-43c7-99ae-3f7cd19a217b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-150729" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-150729
--- PASS: TestErrorJSONOutput (0.24s)
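
As the stdout block above shows, --output=json emits one CloudEvents-style JSON object per line (specversion, id, source, type, datacontenttype, data), and TestErrorJSONOutput asserts on an io.k8s.sigs.minikube.error event such as DRV_UNSUPPORTED_OS with exitcode 56. A minimal decoding sketch in Go; the field set is taken only from the events printed above, so treat anything beyond it as an assumption:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event mirrors the fields visible in the log lines above.
type event struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"` // io.k8s.sigs.minikube.step / .info / .error
	Data        map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin) // pipe minikube's --output=json stream here
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // tolerate any non-JSON line
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			// e.g. DRV_UNSUPPORTED_OS: The driver 'fail' is not supported on linux/arm64
			fmt.Println("error event:", ev.Data["name"], "-", ev.Data["message"])
		}
	}
}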

TestKicCustomNetwork/create_custom_network (40.29s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-786171 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-786171 --network=: (38.128628774s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-786171" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-786171
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-786171: (2.137541025s)
--- PASS: TestKicCustomNetwork/create_custom_network (40.29s)

TestKicCustomNetwork/use_default_bridge_network (33.43s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-791502 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-791502 --network=bridge: (31.406522694s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-791502" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-791502
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-791502: (1.997050955s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (33.43s)

TestKicExistingNetwork (33.32s)
=== RUN   TestKicExistingNetwork
I0127 12:54:20.563395 1186773 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0127 12:54:20.580028 1186773 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0127 12:54:20.580129 1186773 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0127 12:54:20.580151 1186773 cli_runner.go:164] Run: docker network inspect existing-network
W0127 12:54:20.596740 1186773 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0127 12:54:20.596775 1186773 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I0127 12:54:20.596793 1186773 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I0127 12:54:20.596989 1186773 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0127 12:54:20.615251 1186773 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-0f9fe3033877 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:e9:d1:42:e8} reservation:<nil>}
I0127 12:54:20.619966 1186773 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
I0127 12:54:20.620409 1186773 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a59800}
I0127 12:54:20.621116 1186773 network_create.go:124] attempt to create docker network existing-network 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
I0127 12:54:20.621630 1186773 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0127 12:54:20.691894 1186773 network_create.go:108] docker network existing-network 192.168.67.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-917962 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-917962 --network=existing-network: (31.144011212s)
helpers_test.go:175: Cleaning up "existing-network-917962" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-917962
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-917962: (2.016930829s)
I0127 12:54:53.869598 1186773 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (33.32s)
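
The subnet-selection trace above illustrates how a free network is found: 192.168.49.0/24 is skipped as taken by an existing bridge, 192.168.58.0/24 as reserved, and 192.168.67.0/24 is chosen, i.e. the candidates appear to advance by 9 in the third octet. A toy Go model of that walk follows; the step size and the in-use set are inferred from this single trace, not taken from minikube's network package:

package main

import "fmt"

func main() {
	// Stand-ins for the docker-network and reservation checks in the trace.
	inUse := map[string]bool{
		"192.168.49.0/24": true, // taken by bridge br-0f9fe3033877
		"192.168.58.0/24": true, // reserved
	}
	for third := 49; third <= 255; third += 9 {
		subnet := fmt.Sprintf("192.168.%d.0/24", third)
		if !inUse[subnet] {
			fmt.Println("using free private subnet", subnet) // 192.168.67.0/24
			return
		}
	}
}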

TestKicCustomSubnet (31.79s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-677391 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-677391 --subnet=192.168.60.0/24: (29.629477617s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-677391 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-677391" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-677391
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-677391: (2.141023302s)
--- PASS: TestKicCustomSubnet (31.79s)

TestKicStaticIP (33.26s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-166317 --static-ip=192.168.200.200
E0127 12:55:33.637176 1186773 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/addons-453723/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-166317 --static-ip=192.168.200.200: (30.983367714s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-166317 ip
helpers_test.go:175: Cleaning up "static-ip-166317" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-166317
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-166317: (2.126009154s)
--- PASS: TestKicStaticIP (33.26s)

TestMainNoArgs (0.05s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (69.28s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-848665 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-848665 --driver=docker  --container-runtime=containerd: (33.594538632s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-851241 --driver=docker  --container-runtime=containerd
E0127 12:56:55.561982 1186773 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/functional-547155/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:56:56.697375 1186773 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/addons-453723/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-851241 --driver=docker  --container-runtime=containerd: (30.18795773s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-848665
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-851241
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-851241" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-851241
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-851241: (2.015570225s)
helpers_test.go:175: Cleaning up "first-848665" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-848665
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-848665: (1.991211138s)
--- PASS: TestMinikubeProfile (69.28s)

TestMountStart/serial/StartWithMountFirst (6.39s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-519861 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-519861 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.390080112s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.39s)

TestMountStart/serial/VerifyMountFirst (0.27s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-519861 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)

TestMountStart/serial/StartWithMountSecond (8.47s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-522109 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-522109 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (7.472049004s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.47s)

TestMountStart/serial/VerifyMountSecond (0.26s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-522109 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

TestMountStart/serial/DeleteFirst (1.63s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-519861 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-519861 --alsologtostderr -v=5: (1.632958594s)
--- PASS: TestMountStart/serial/DeleteFirst (1.63s)

TestMountStart/serial/VerifyMountPostDelete (0.27s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-522109 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

TestMountStart/serial/Stop (1.2s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-522109
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-522109: (1.201565312s)
--- PASS: TestMountStart/serial/Stop (1.20s)

TestMountStart/serial/RestartStopped (7.76s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-522109
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-522109: (6.764222917s)
--- PASS: TestMountStart/serial/RestartStopped (7.76s)

TestMountStart/serial/VerifyMountPostStop (0.27s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-522109 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

TestMultiNode/serial/FreshStart2Nodes (92.75s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-303096 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-303096 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m32.22419417s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-303096 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (92.75s)

TestMultiNode/serial/DeployApp2Nodes (15.81s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-303096 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-303096 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-303096 -- rollout status deployment/busybox: (13.921201084s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-303096 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-303096 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-303096 -- exec busybox-58667487b6-585zv -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-303096 -- exec busybox-58667487b6-hssr8 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-303096 -- exec busybox-58667487b6-585zv -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-303096 -- exec busybox-58667487b6-hssr8 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-303096 -- exec busybox-58667487b6-585zv -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-303096 -- exec busybox-58667487b6-hssr8 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (15.81s)

TestMultiNode/serial/PingHostFrom2Pods (1.02s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-303096 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-303096 -- exec busybox-58667487b6-585zv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-303096 -- exec busybox-58667487b6-585zv -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-303096 -- exec busybox-58667487b6-hssr8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-303096 -- exec busybox-58667487b6-hssr8 -- sh -c "ping -c 1 192.168.58.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.02s)

TestMultiNode/serial/AddNode (18.65s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-303096 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-303096 -v 3 --alsologtostderr: (17.998301917s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-303096 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (18.65s)

TestMultiNode/serial/MultiNodeLabels (0.09s)
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-303096 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

TestMultiNode/serial/ProfileList (0.74s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.74s)

TestMultiNode/serial/CopyFile (9.98s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-303096 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-303096 cp testdata/cp-test.txt multinode-303096:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-303096 ssh -n multinode-303096 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-303096 cp multinode-303096:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2286843705/001/cp-test_multinode-303096.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-303096 ssh -n multinode-303096 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-303096 cp multinode-303096:/home/docker/cp-test.txt multinode-303096-m02:/home/docker/cp-test_multinode-303096_multinode-303096-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-303096 ssh -n multinode-303096 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-303096 ssh -n multinode-303096-m02 "sudo cat /home/docker/cp-test_multinode-303096_multinode-303096-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-303096 cp multinode-303096:/home/docker/cp-test.txt multinode-303096-m03:/home/docker/cp-test_multinode-303096_multinode-303096-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-303096 ssh -n multinode-303096 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-303096 ssh -n multinode-303096-m03 "sudo cat /home/docker/cp-test_multinode-303096_multinode-303096-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-303096 cp testdata/cp-test.txt multinode-303096-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-303096 ssh -n multinode-303096-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-303096 cp multinode-303096-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2286843705/001/cp-test_multinode-303096-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-303096 ssh -n multinode-303096-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-303096 cp multinode-303096-m02:/home/docker/cp-test.txt multinode-303096:/home/docker/cp-test_multinode-303096-m02_multinode-303096.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-303096 ssh -n multinode-303096-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-303096 ssh -n multinode-303096 "sudo cat /home/docker/cp-test_multinode-303096-m02_multinode-303096.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-303096 cp multinode-303096-m02:/home/docker/cp-test.txt multinode-303096-m03:/home/docker/cp-test_multinode-303096-m02_multinode-303096-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-303096 ssh -n multinode-303096-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-303096 ssh -n multinode-303096-m03 "sudo cat /home/docker/cp-test_multinode-303096-m02_multinode-303096-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-303096 cp testdata/cp-test.txt multinode-303096-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-303096 ssh -n multinode-303096-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-303096 cp multinode-303096-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2286843705/001/cp-test_multinode-303096-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-303096 ssh -n multinode-303096-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-303096 cp multinode-303096-m03:/home/docker/cp-test.txt multinode-303096:/home/docker/cp-test_multinode-303096-m03_multinode-303096.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-303096 ssh -n multinode-303096-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-303096 ssh -n multinode-303096 "sudo cat /home/docker/cp-test_multinode-303096-m03_multinode-303096.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-303096 cp multinode-303096-m03:/home/docker/cp-test.txt multinode-303096-m02:/home/docker/cp-test_multinode-303096-m03_multinode-303096-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-303096 ssh -n multinode-303096-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-303096 ssh -n multinode-303096-m02 "sudo cat /home/docker/cp-test_multinode-303096-m03_multinode-303096-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.98s)
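
Each cp in the block above is immediately verified by reading the file back over ssh with sudo cat. The shape of one such round trip as a Go sketch (illustrative, with arguments hard-coded from the first pair of commands; this is not the suite's own helper, which lives in helpers_test.go):

package main

import (
	"bytes"
	"log"
	"os"
	"os/exec"
)

func main() {
	want, err := os.ReadFile("testdata/cp-test.txt")
	if err != nil {
		log.Fatal(err)
	}
	run := func(args ...string) []byte {
		out, err := exec.Command("out/minikube-linux-arm64", args...).Output()
		if err != nil {
			log.Fatal(err)
		}
		return out
	}
	// Push the file into the node, then read it back the way the test does.
	run("-p", "multinode-303096", "cp", "testdata/cp-test.txt",
		"multinode-303096:/home/docker/cp-test.txt")
	got := run("-p", "multinode-303096", "ssh", "-n", "multinode-303096",
		"sudo cat /home/docker/cp-test.txt")
	if !bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)) {
		log.Fatal("copied file does not match the source")
	}
}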

TestMultiNode/serial/StopNode (2.25s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-303096 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-303096 node stop m03: (1.217040759s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-303096 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-303096 status: exit status 7 (529.388582ms)

-- stdout --
	multinode-303096
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-303096-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-303096-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-303096 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-303096 status --alsologtostderr: exit status 7 (502.261325ms)

-- stdout --
	multinode-303096
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-303096-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-303096-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0127 12:59:57.447815 1310621 out.go:345] Setting OutFile to fd 1 ...
	I0127 12:59:57.447963 1310621 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:59:57.447977 1310621 out.go:358] Setting ErrFile to fd 2...
	I0127 12:59:57.447984 1310621 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:59:57.448378 1310621 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20317-1181389/.minikube/bin
	I0127 12:59:57.449146 1310621 out.go:352] Setting JSON to false
	I0127 12:59:57.449231 1310621 mustload.go:65] Loading cluster: multinode-303096
	I0127 12:59:57.449490 1310621 notify.go:220] Checking for updates...
	I0127 12:59:57.449755 1310621 config.go:182] Loaded profile config "multinode-303096": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 12:59:57.449794 1310621 status.go:174] checking status of multinode-303096 ...
	I0127 12:59:57.450656 1310621 cli_runner.go:164] Run: docker container inspect multinode-303096 --format={{.State.Status}}
	I0127 12:59:57.470216 1310621 status.go:371] multinode-303096 host status = "Running" (err=<nil>)
	I0127 12:59:57.470242 1310621 host.go:66] Checking if "multinode-303096" exists ...
	I0127 12:59:57.470565 1310621 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-303096
	I0127 12:59:57.496921 1310621 host.go:66] Checking if "multinode-303096" exists ...
	I0127 12:59:57.497350 1310621 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0127 12:59:57.497409 1310621 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-303096
	I0127 12:59:57.516314 1310621 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34072 SSHKeyPath:/home/jenkins/minikube-integration/20317-1181389/.minikube/machines/multinode-303096/id_rsa Username:docker}
	I0127 12:59:57.606054 1310621 ssh_runner.go:195] Run: systemctl --version
	I0127 12:59:57.610448 1310621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 12:59:57.622034 1310621 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0127 12:59:57.675105 1310621 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:63 SystemTime:2025-01-27 12:59:57.665302076 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0127 12:59:57.675682 1310621 kubeconfig.go:125] found "multinode-303096" server: "https://192.168.58.2:8443"
	I0127 12:59:57.675723 1310621 api_server.go:166] Checking apiserver status ...
	I0127 12:59:57.675766 1310621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:59:57.686496 1310621 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1464/cgroup
	I0127 12:59:57.695700 1310621 api_server.go:182] apiserver freezer: "3:freezer:/docker/15e330dcd0514dd4ca48c40597fa2d228f2cd0d42f930ab7bf366d5f6da2d44f/kubepods/burstable/pod1301e68c4bafcd753d3549e72ab1fdff/2dc32eb541ddf7f1977b45fa7e3e29b4aecb294380170af780dc891083a97050"
	I0127 12:59:57.695781 1310621 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/15e330dcd0514dd4ca48c40597fa2d228f2cd0d42f930ab7bf366d5f6da2d44f/kubepods/burstable/pod1301e68c4bafcd753d3549e72ab1fdff/2dc32eb541ddf7f1977b45fa7e3e29b4aecb294380170af780dc891083a97050/freezer.state
	I0127 12:59:57.704614 1310621 api_server.go:204] freezer state: "THAWED"
	I0127 12:59:57.704645 1310621 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0127 12:59:57.712751 1310621 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0127 12:59:57.712794 1310621 status.go:463] multinode-303096 apiserver status = Running (err=<nil>)
	I0127 12:59:57.712806 1310621 status.go:176] multinode-303096 status: &{Name:multinode-303096 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 12:59:57.712829 1310621 status.go:174] checking status of multinode-303096-m02 ...
	I0127 12:59:57.713187 1310621 cli_runner.go:164] Run: docker container inspect multinode-303096-m02 --format={{.State.Status}}
	I0127 12:59:57.730812 1310621 status.go:371] multinode-303096-m02 host status = "Running" (err=<nil>)
	I0127 12:59:57.730838 1310621 host.go:66] Checking if "multinode-303096-m02" exists ...
	I0127 12:59:57.731137 1310621 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-303096-m02
	I0127 12:59:57.747643 1310621 host.go:66] Checking if "multinode-303096-m02" exists ...
	I0127 12:59:57.747960 1310621 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0127 12:59:57.748014 1310621 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-303096-m02
	I0127 12:59:57.765141 1310621 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34077 SSHKeyPath:/home/jenkins/minikube-integration/20317-1181389/.minikube/machines/multinode-303096-m02/id_rsa Username:docker}
	I0127 12:59:57.849823 1310621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 12:59:57.861011 1310621 status.go:176] multinode-303096-m02 status: &{Name:multinode-303096-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0127 12:59:57.861086 1310621 status.go:174] checking status of multinode-303096-m03 ...
	I0127 12:59:57.861448 1310621 cli_runner.go:164] Run: docker container inspect multinode-303096-m03 --format={{.State.Status}}
	I0127 12:59:57.878277 1310621 status.go:371] multinode-303096-m03 host status = "Stopped" (err=<nil>)
	I0127 12:59:57.878300 1310621 status.go:384] host is not running, skipping remaining checks
	I0127 12:59:57.878307 1310621 status.go:176] multinode-303096-m03 status: &{Name:multinode-303096-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.25s)
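
The stderr trace above also documents how status decides a control-plane node is healthy: find the kube-apiserver process, confirm its freezer cgroup is THAWED, then GET /healthz on the apiserver endpoint and expect 200 with body "ok". A minimal Go sketch of just the HTTP probe; the address is the one from this trace, and skipping TLS verification is purely a simplification to keep the sketch self-contained (the real client trusts the cluster CA):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Endpoint taken from the trace above.
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch-only shortcut
	}}
	resp, err := client.Get("https://192.168.58.2:8443/healthz")
	if err != nil {
		fmt.Println("apiserver unreachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz: %d %s\n", resp.StatusCode, body) // 200 ok when healthy
}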

TestMultiNode/serial/StartAfterStop (9.65s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-303096 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-303096 node start m03 -v=7 --alsologtostderr: (8.826425645s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-303096 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.65s)

TestMultiNode/serial/RestartKeepsNodes (86.69s)
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-303096
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-303096
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-303096: (24.808867718s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-303096 --wait=true -v=8 --alsologtostderr
E0127 13:00:33.634267 1186773 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/addons-453723/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-303096 --wait=true -v=8 --alsologtostderr: (1m1.749782043s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-303096
--- PASS: TestMultiNode/serial/RestartKeepsNodes (86.69s)

TestMultiNode/serial/DeleteNode (5.35s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-303096 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-303096 node delete m03: (4.636477189s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-303096 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.35s)
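
The DeleteNode check closes by driving node readiness through a kubectl go-template (the `kubectl get nodes -o go-template=...` run above) rather than scraping table output. A minimal Go sketch of the same check, shelling out to kubectl the way the harness does; it assumes kubectl is on PATH with a kubeconfig already pointing at the cluster, and the helper name is illustrative:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// allNodesReady runs the same go-template the test uses: one output line per
// node, carrying the status of its "Ready" condition.
func allNodesReady() (bool, error) {
	tmpl := `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`
	out, err := exec.Command("kubectl", "get", "nodes", "-o", "go-template="+tmpl).Output()
	if err != nil {
		return false, err
	}
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if strings.TrimSpace(line) != "True" {
			return false, nil // a node reported NotReady or Unknown
		}
	}
	return true, nil
}

func main() {
	ok, err := allNodesReady()
	fmt.Println(ok, err)
}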

TestMultiNode/serial/StopMultiNode (23.85s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-303096 stop
E0127 13:01:55.562412 1186773 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/functional-547155/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-303096 stop: (23.662322465s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-303096 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-303096 status: exit status 7 (90.676557ms)

-- stdout --
	multinode-303096
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-303096-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-303096 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-303096 status --alsologtostderr: exit status 7 (100.472151ms)

-- stdout --
	multinode-303096
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-303096-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0127 13:02:03.374113 1318652 out.go:345] Setting OutFile to fd 1 ...
	I0127 13:02:03.374225 1318652 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 13:02:03.374236 1318652 out.go:358] Setting ErrFile to fd 2...
	I0127 13:02:03.374241 1318652 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 13:02:03.374495 1318652 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20317-1181389/.minikube/bin
	I0127 13:02:03.374671 1318652 out.go:352] Setting JSON to false
	I0127 13:02:03.374709 1318652 mustload.go:65] Loading cluster: multinode-303096
	I0127 13:02:03.374810 1318652 notify.go:220] Checking for updates...
	I0127 13:02:03.375132 1318652 config.go:182] Loaded profile config "multinode-303096": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 13:02:03.375149 1318652 status.go:174] checking status of multinode-303096 ...
	I0127 13:02:03.375977 1318652 cli_runner.go:164] Run: docker container inspect multinode-303096 --format={{.State.Status}}
	I0127 13:02:03.395282 1318652 status.go:371] multinode-303096 host status = "Stopped" (err=<nil>)
	I0127 13:02:03.395308 1318652 status.go:384] host is not running, skipping remaining checks
	I0127 13:02:03.395316 1318652 status.go:176] multinode-303096 status: &{Name:multinode-303096 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 13:02:03.395350 1318652 status.go:174] checking status of multinode-303096-m02 ...
	I0127 13:02:03.395703 1318652 cli_runner.go:164] Run: docker container inspect multinode-303096-m02 --format={{.State.Status}}
	I0127 13:02:03.424037 1318652 status.go:371] multinode-303096-m02 host status = "Stopped" (err=<nil>)
	I0127 13:02:03.424058 1318652 status.go:384] host is not running, skipping remaining checks
	I0127 13:02:03.424064 1318652 status.go:176] multinode-303096-m02 status: &{Name:multinode-303096-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.85s)
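
Throughout this run `minikube status` exits non-zero on purpose when hosts are down (exit status 7 above), so the meaningful signal is the exit code plus the printed table, not error-versus-success. A hedged Go sketch of reading that code; the binary path and profile name are copied from this run, the rest is illustrative:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "-p", "multinode-303096", "status")
	out, err := cmd.Output() // stdout still carries the status table on exit 7
	code := 0
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		code = exitErr.ExitCode() // 7 here means "host stopped", not a test failure
	} else if err != nil {
		panic(err) // could not run the binary at all
	}
	fmt.Printf("exit=%d\n%s", code, out)
}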

TestMultiNode/serial/RestartMultiNode (56.31s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-303096 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-303096 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (55.644613571s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-303096 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (56.31s)

TestMultiNode/serial/ValidateNameConflict (36.47s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-303096
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-303096-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-303096-m02 --driver=docker  --container-runtime=containerd: exit status 14 (98.536782ms)

-- stdout --
	* [multinode-303096-m02] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20317
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20317-1181389/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20317-1181389/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-303096-m02' is duplicated with machine name 'multinode-303096-m02' in profile 'multinode-303096'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-303096-m03 --driver=docker  --container-runtime=containerd
E0127 13:03:18.629181 1186773 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/functional-547155/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-303096-m03 --driver=docker  --container-runtime=containerd: (34.005617172s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-303096
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-303096: exit status 80 (338.13959ms)

-- stdout --
	* Adding node m03 to cluster multinode-303096 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-303096-m03 already exists in multinode-303096-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-303096-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-303096-m03: (1.96214005s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (36.47s)

TestPreload (120.88s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-437224 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-437224 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m24.901078427s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-437224 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-437224 image pull gcr.io/k8s-minikube/busybox: (1.990512727s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-437224
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-437224: (11.994022355s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-437224 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
E0127 13:05:33.634905 1186773 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/addons-453723/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-437224 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (19.265853722s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-437224 image list
helpers_test.go:175: Cleaning up "test-preload-437224" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-437224
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-437224: (2.42570148s)
--- PASS: TestPreload (120.88s)

TestScheduledStopUnix (107.86s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-434679 --memory=2048 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-434679 --memory=2048 --driver=docker  --container-runtime=containerd: (31.553567879s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-434679 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-434679 -n scheduled-stop-434679
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-434679 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0127 13:06:13.095173 1186773 retry.go:31] will retry after 60.396µs: open /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/scheduled-stop-434679/pid: no such file or directory
I0127 13:06:13.096299 1186773 retry.go:31] will retry after 184.058µs: open /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/scheduled-stop-434679/pid: no such file or directory
I0127 13:06:13.097427 1186773 retry.go:31] will retry after 113.466µs: open /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/scheduled-stop-434679/pid: no such file or directory
I0127 13:06:13.098517 1186773 retry.go:31] will retry after 190.404µs: open /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/scheduled-stop-434679/pid: no such file or directory
I0127 13:06:13.099598 1186773 retry.go:31] will retry after 534.127µs: open /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/scheduled-stop-434679/pid: no such file or directory
I0127 13:06:13.100864 1186773 retry.go:31] will retry after 831.533µs: open /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/scheduled-stop-434679/pid: no such file or directory
I0127 13:06:13.102264 1186773 retry.go:31] will retry after 1.201823ms: open /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/scheduled-stop-434679/pid: no such file or directory
I0127 13:06:13.104455 1186773 retry.go:31] will retry after 1.663836ms: open /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/scheduled-stop-434679/pid: no such file or directory
I0127 13:06:13.106629 1186773 retry.go:31] will retry after 2.040772ms: open /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/scheduled-stop-434679/pid: no such file or directory
I0127 13:06:13.108805 1186773 retry.go:31] will retry after 4.02367ms: open /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/scheduled-stop-434679/pid: no such file or directory
I0127 13:06:13.112972 1186773 retry.go:31] will retry after 2.919231ms: open /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/scheduled-stop-434679/pid: no such file or directory
I0127 13:06:13.116250 1186773 retry.go:31] will retry after 8.737428ms: open /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/scheduled-stop-434679/pid: no such file or directory
I0127 13:06:13.125515 1186773 retry.go:31] will retry after 12.510003ms: open /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/scheduled-stop-434679/pid: no such file or directory
I0127 13:06:13.141232 1186773 retry.go:31] will retry after 15.439137ms: open /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/scheduled-stop-434679/pid: no such file or directory
I0127 13:06:13.157449 1186773 retry.go:31] will retry after 35.534556ms: open /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/scheduled-stop-434679/pid: no such file or directory
I0127 13:06:13.193669 1186773 retry.go:31] will retry after 29.55138ms: open /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/scheduled-stop-434679/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-434679 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-434679 -n scheduled-stop-434679
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-434679
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-434679 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0127 13:06:55.562284 1186773 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/functional-547155/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-434679
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-434679: exit status 7 (70.164593ms)

-- stdout --
	scheduled-stop-434679
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-434679 -n scheduled-stop-434679
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-434679 -n scheduled-stop-434679: exit status 7 (74.134313ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-434679" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-434679
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-434679: (4.711811757s)
--- PASS: TestScheduledStopUnix (107.86s)
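
The retry.go lines above show the harness polling for the scheduled-stop pid file with roughly doubling delays between attempts. A minimal sketch of that retry-with-growing-backoff shape; the path, attempt count, and starting delay are assumptions for illustration, not minikube's actual values:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForFile polls until path exists, sleeping a little longer each time,
// mirroring the doubling pattern visible in the retry.go log lines.
func waitForFile(path string, attempts int) error {
	delay := 50 * time.Microsecond
	for i := 1; i <= attempts; i++ {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		fmt.Printf("retry %d: will retry after %s\n", i, delay)
		time.Sleep(delay)
		delay *= 2
	}
	return fmt.Errorf("%s did not appear after %d attempts", path, attempts)
}

func main() {
	_ = waitForFile("/tmp/scheduled-stop.pid", 16) // hypothetical pid-file path
}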

TestInsufficientStorage (10.91s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-221441 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-221441 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (8.4593853s)

-- stdout --
	{"specversion":"1.0","id":"1036e4fc-43e0-4a35-90bc-05d3cd8a433b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-221441] minikube v1.35.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"47731fcd-fdc2-4daa-95d2-d03f11837119","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20317"}}
	{"specversion":"1.0","id":"32a9f320-4a17-46f9-906b-799a55335771","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"0b83dc91-469b-4321-9e8b-4d9aa8f08813","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20317-1181389/kubeconfig"}}
	{"specversion":"1.0","id":"0541399e-18f4-4518-97ce-5c175608a474","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20317-1181389/.minikube"}}
	{"specversion":"1.0","id":"8110143e-b3a9-429f-a252-1fd9fe689360","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"157cc320-0d98-4bca-afa9-356cb6acf881","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"f90ff3e4-88d2-4409-810a-43b6fb88f787","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"fbfc648d-eef2-473f-b7aa-266451650419","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"cb92019e-1c92-4e37-8e27-411c61e44352","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"e1f51952-b6a4-4938-8223-9abf6838c36c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"b79b9dbd-4433-4531-a966-65d187c81a9c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-221441\" primary control-plane node in \"insufficient-storage-221441\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"55030583-615e-4a6f-a934-4df91a14a1eb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.46 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"cf4a8360-33dc-4247-bb27-792e4e06af1c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"6e0a44b1-f4c2-433d-ac38-9e621b726b04","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-221441 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-221441 --output=json --layout=cluster: exit status 7 (277.742969ms)

-- stdout --
	{"Name":"insufficient-storage-221441","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-221441","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0127 13:07:37.594769 1337566 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-221441" does not appear in /home/jenkins/minikube-integration/20317-1181389/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-221441 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-221441 --output=json --layout=cluster: exit status 7 (276.451903ms)

-- stdout --
	{"Name":"insufficient-storage-221441","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-221441","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0127 13:07:37.871185 1337626 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-221441" does not appear in /home/jenkins/minikube-integration/20317-1181389/kubeconfig
	E0127 13:07:37.881480 1337626 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/insufficient-storage-221441/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-221441" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-221441
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-221441: (1.89984973s)
--- PASS: TestInsufficientStorage (10.91s)
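
With --output=json, every stdout line above is a self-contained CloudEvents-style object (specversion, id, source, type, data), which is what lets the test detect the out-of-disk condition mechanically. A sketch of consuming such a stream, modelling only the fields visible in this run:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// minikubeEvent covers just the fields shown in the log above; data values
// are all strings there (e.g. "exitcode":"26").
type minikubeEvent struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Type        string            `json:"type"` // io.k8s.sigs.minikube.step / .info / .error
	Data        map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)               // e.g. piped from: minikube start --output=json
	sc.Buffer(make([]byte, 0, 64*1024), 1024*1024) // some event lines are long
	for sc.Scan() {
		var ev minikubeEvent
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // tolerate any non-JSON noise
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Println("error:", ev.Data["message"], "exitcode:", ev.Data["exitcode"])
		}
	}
}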

TestRunningBinaryUpgrade (94.5s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.1613178327 start -p running-upgrade-013454 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.1613178327 start -p running-upgrade-013454 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (50.957493216s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-013454 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-013454 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (39.098149229s)
helpers_test.go:175: Cleaning up "running-upgrade-013454" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-013454
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-013454: (2.966772032s)
--- PASS: TestRunningBinaryUpgrade (94.50s)

TestKubernetesUpgrade (97.91s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-480654 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-480654 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (58.203807434s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-480654
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-480654: (1.295559086s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-480654 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-480654 status --format={{.Host}}: exit status 7 (94.106989ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-480654 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E0127 13:10:33.635431 1186773 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/addons-453723/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-480654 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (29.876105792s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-480654 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-480654 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-480654 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd: exit status 106 (101.398686ms)

-- stdout --
	* [kubernetes-upgrade-480654] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20317
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20317-1181389/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20317-1181389/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.32.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-480654
	    minikube start -p kubernetes-upgrade-480654 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4806542 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.32.1, by running:
	    
	    minikube start -p kubernetes-upgrade-480654 --kubernetes-version=v1.32.1
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-480654 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-480654 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (5.944519101s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-480654" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-480654
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-480654: (2.284115054s)
--- PASS: TestKubernetesUpgrade (97.91s)
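
The downgrade attempt fails fast (exit status 106) because minikube compares the requested Kubernetes version against the cluster's existing one and refuses to move backwards. A speculative sketch of that kind of guard using golang.org/x/mod/semver; the wording is paraphrased and this is not minikube's actual code path:

package main

import (
	"fmt"

	"golang.org/x/mod/semver"
)

// checkVersionChange rejects requests that would downgrade the cluster.
func checkVersionChange(existing, requested string) error {
	if semver.Compare(requested, existing) < 0 {
		return fmt.Errorf("unable to safely downgrade existing Kubernetes %s cluster to %s", existing, requested)
	}
	return nil
}

func main() {
	// Mirrors the run above: v1.32.1 cluster, v1.20.0 requested => error.
	fmt.Println(checkVersionChange("v1.32.1", "v1.20.0"))
}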

TestMissingContainerUpgrade (181.94s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.996079718 start -p missing-upgrade-704347 --memory=2200 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.996079718 start -p missing-upgrade-704347 --memory=2200 --driver=docker  --container-runtime=containerd: (1m42.995492016s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-704347
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-704347: (10.367993315s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-704347
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-704347 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-704347 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m5.550474291s)
helpers_test.go:175: Cleaning up "missing-upgrade-704347" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-704347
E0127 13:11:55.561408 1186773 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/functional-547155/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-704347: (2.308268135s)
--- PASS: TestMissingContainerUpgrade (181.94s)

TestPause/serial/Start (83.91s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-587464 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-587464 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (1m23.910177806s)
--- PASS: TestPause/serial/Start (83.91s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-184781 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-184781 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd: exit status 14 (112.223846ms)

-- stdout --
	* [NoKubernetes-184781] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20317
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20317-1181389/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20317-1181389/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

TestNoKubernetes/serial/StartWithK8s (42.58s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-184781 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-184781 --driver=docker  --container-runtime=containerd: (42.222526129s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-184781 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (42.58s)

TestNoKubernetes/serial/StartWithStopK8s (16.64s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-184781 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-184781 --no-kubernetes --driver=docker  --container-runtime=containerd: (14.371905361s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-184781 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-184781 status -o json: exit status 2 (305.926767ms)

-- stdout --
	{"Name":"NoKubernetes-184781","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-184781
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-184781: (1.959819716s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (16.64s)
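
The `status -o json` call above prints one flat JSON object for the profile, which is what the test asserts against instead of the table output. A small decoding sketch using the exact line from this run; fields beyond those shown are out of scope:

package main

import (
	"encoding/json"
	"fmt"
)

// profileStatus matches the field set printed by `minikube status -o json`
// in the run above.
type profileStatus struct {
	Name       string `json:"Name"`
	Host       string `json:"Host"`
	Kubelet    string `json:"Kubelet"`
	APIServer  string `json:"APIServer"`
	Kubeconfig string `json:"Kubeconfig"`
	Worker     bool   `json:"Worker"`
}

func main() {
	raw := `{"Name":"NoKubernetes-184781","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}`
	var st profileStatus
	if err := json.Unmarshal([]byte(raw), &st); err != nil {
		panic(err)
	}
	// With --no-kubernetes the host keeps running while kubelet/apiserver stay stopped.
	fmt.Printf("%s: host=%s kubelet=%s apiserver=%s\n", st.Name, st.Host, st.Kubelet, st.APIServer)
}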

TestNoKubernetes/serial/Start (5.24s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-184781 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-184781 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.237170233s)
--- PASS: TestNoKubernetes/serial/Start (5.24s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.31s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-184781 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-184781 "sudo systemctl is-active --quiet service kubelet": exit status 1 (310.6202ms)

** stderr **
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.31s)

TestNoKubernetes/serial/ProfileList (1.17s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.17s)

TestNoKubernetes/serial/Stop (1.22s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-184781
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-184781: (1.220179626s)
--- PASS: TestNoKubernetes/serial/Stop (1.22s)

TestNoKubernetes/serial/StartNoArgs (6.69s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-184781 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-184781 --driver=docker  --container-runtime=containerd: (6.686139995s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.69s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-184781 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-184781 "sudo systemctl is-active --quiet service kubelet": exit status 1 (280.827755ms)

** stderr **
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

TestPause/serial/SecondStartNoReconfiguration (6.02s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-587464 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-587464 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (5.999722417s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (6.02s)

TestPause/serial/Pause (0.95s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-587464 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.95s)

TestPause/serial/VerifyStatus (0.52s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-587464 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-587464 --output=json --layout=cluster: exit status 2 (520.024407ms)

-- stdout --
	{"Name":"pause-587464","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-587464","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.52s)
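
The --layout=cluster payload above encodes component state as HTTP-like status codes: 200 OK, 405 Stopped, 418 Paused, with 500 Error and 507 InsufficientStorage appearing earlier in this report. A tiny decoder covering only the codes seen here; the mapping is read off this output, not an exhaustive list:

package main

import "fmt"

// statusName translates the StatusCode values observed in this report.
func statusName(code int) string {
	switch code {
	case 200:
		return "OK"
	case 405:
		return "Stopped"
	case 418:
		return "Paused"
	case 500:
		return "Error"
	case 507:
		return "InsufficientStorage"
	default:
		return fmt.Sprintf("Unknown(%d)", code)
	}
}

func main() {
	for _, c := range []int{200, 405, 418, 507} {
		fmt.Println(c, statusName(c))
	}
}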

TestPause/serial/Unpause (0.93s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-587464 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.93s)

TestPause/serial/PauseAgain (1.14s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-587464 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-587464 --alsologtostderr -v=5: (1.140921703s)
--- PASS: TestPause/serial/PauseAgain (1.14s)

TestPause/serial/DeletePaused (3.73s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-587464 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-587464 --alsologtostderr -v=5: (3.724930876s)
--- PASS: TestPause/serial/DeletePaused (3.73s)

TestPause/serial/VerifyDeletedResources (0.17s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-587464
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-587464: exit status 1 (19.20936ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-587464: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.17s)

TestStoppedBinaryUpgrade/Setup (0.61s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.61s)

TestStoppedBinaryUpgrade/Upgrade (111.72s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2464712668 start -p stopped-upgrade-653869 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.2464712668 start -p stopped-upgrade-653869 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (47.593141851s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.2464712668 -p stopped-upgrade-653869 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.2464712668 -p stopped-upgrade-653869 stop: (22.088521897s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-653869 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-653869 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (42.035209938s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (111.72s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.28s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-653869
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-653869: (1.275045616s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.28s)

TestNetworkPlugins/group/false (5.25s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-008030 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-008030 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (206.192759ms)

-- stdout --
	* [false-008030] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20317
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20317-1181389/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20317-1181389/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I0127 13:13:37.732186 1373711 out.go:345] Setting OutFile to fd 1 ...
	I0127 13:13:37.732385 1373711 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 13:13:37.732417 1373711 out.go:358] Setting ErrFile to fd 2...
	I0127 13:13:37.732437 1373711 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 13:13:37.732794 1373711 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20317-1181389/.minikube/bin
	I0127 13:13:37.733788 1373711 out.go:352] Setting JSON to false
	I0127 13:13:37.734975 1373711 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":21363,"bootTime":1737962255,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0127 13:13:37.735087 1373711 start.go:139] virtualization:  
	I0127 13:13:37.738915 1373711 out.go:177] * [false-008030] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	I0127 13:13:37.742431 1373711 out.go:177]   - MINIKUBE_LOCATION=20317
	I0127 13:13:37.742470 1373711 notify.go:220] Checking for updates...
	I0127 13:13:37.748130 1373711 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 13:13:37.750813 1373711 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20317-1181389/kubeconfig
	I0127 13:13:37.753330 1373711 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20317-1181389/.minikube
	I0127 13:13:37.755881 1373711 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0127 13:13:37.758510 1373711 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 13:13:37.761826 1373711 config.go:182] Loaded profile config "force-systemd-flag-334749": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 13:13:37.761997 1373711 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 13:13:37.784964 1373711 docker.go:123] docker version: linux-27.5.1:Docker Engine - Community
	I0127 13:13:37.785162 1373711 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0127 13:13:37.854582 1373711 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:43 SystemTime:2025-01-27 13:13:37.845742842 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0127 13:13:37.854687 1373711 docker.go:318] overlay module found
	I0127 13:13:37.860237 1373711 out.go:177] * Using the docker driver based on user configuration
	I0127 13:13:37.863180 1373711 start.go:297] selected driver: docker
	I0127 13:13:37.863201 1373711 start.go:901] validating driver "docker" against <nil>
	I0127 13:13:37.863216 1373711 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 13:13:37.866235 1373711 out.go:201] 
	W0127 13:13:37.868808 1373711 out.go:270] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0127 13:13:37.871476 1373711 out.go:201] 

** /stderr **
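Note: this MK_USAGE exit is the expected outcome for the "false" network-plugin case. minikube refuses to start a containerd cluster without a CNI plugin, because containerd provides no built-in pod networking. Outside of this negative test, a containerd start would normally select a CNI explicitly; a minimal sketch (profile name hypothetical):

	minikube start -p my-containerd --driver=docker --container-runtime=containerd --cni=bridge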
net_test.go:88: 
----------------------- debugLogs start: false-008030 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-008030

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-008030

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-008030

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-008030

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-008030

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-008030

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-008030

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-008030

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-008030

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-008030

>>> host: /etc/nsswitch.conf:
* Profile "false-008030" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-008030"

>>> host: /etc/hosts:
* Profile "false-008030" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-008030"

>>> host: /etc/resolv.conf:
* Profile "false-008030" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-008030"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-008030

>>> host: crictl pods:
* Profile "false-008030" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-008030"

>>> host: crictl containers:
* Profile "false-008030" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-008030"

>>> k8s: describe netcat deployment:
error: context "false-008030" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-008030" does not exist

>>> k8s: netcat logs:
error: context "false-008030" does not exist

>>> k8s: describe coredns deployment:
error: context "false-008030" does not exist

>>> k8s: describe coredns pods:
error: context "false-008030" does not exist

>>> k8s: coredns logs:
error: context "false-008030" does not exist

>>> k8s: describe api server pod(s):
error: context "false-008030" does not exist

>>> k8s: api server logs:
error: context "false-008030" does not exist

>>> host: /etc/cni:
* Profile "false-008030" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-008030"

>>> host: ip a s:
* Profile "false-008030" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-008030"

>>> host: ip r s:
* Profile "false-008030" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-008030"

>>> host: iptables-save:
* Profile "false-008030" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-008030"

>>> host: iptables table nat:
* Profile "false-008030" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-008030"

>>> k8s: describe kube-proxy daemon set:
error: context "false-008030" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-008030" does not exist

>>> k8s: kube-proxy logs:
error: context "false-008030" does not exist

>>> host: kubelet daemon status:
* Profile "false-008030" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-008030"

>>> host: kubelet daemon config:
* Profile "false-008030" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-008030"

>>> k8s: kubelet logs:
* Profile "false-008030" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-008030"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-008030" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-008030"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-008030" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-008030"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-008030

>>> host: docker daemon status:
* Profile "false-008030" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-008030"

>>> host: docker daemon config:
* Profile "false-008030" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-008030"

>>> host: /etc/docker/daemon.json:
* Profile "false-008030" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-008030"

>>> host: docker system info:
* Profile "false-008030" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-008030"

>>> host: cri-docker daemon status:
* Profile "false-008030" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-008030"

>>> host: cri-docker daemon config:
* Profile "false-008030" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-008030"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-008030" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-008030"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-008030" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-008030"

>>> host: cri-dockerd version:
* Profile "false-008030" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-008030"

>>> host: containerd daemon status:
* Profile "false-008030" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-008030"

>>> host: containerd daemon config:
* Profile "false-008030" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-008030"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-008030" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-008030"

>>> host: /etc/containerd/config.toml:
* Profile "false-008030" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-008030"

>>> host: containerd config dump:
* Profile "false-008030" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-008030"

>>> host: crio daemon status:
* Profile "false-008030" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-008030"

>>> host: crio daemon config:
* Profile "false-008030" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-008030"

>>> host: /etc/crio:
* Profile "false-008030" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-008030"

>>> host: crio config:
* Profile "false-008030" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-008030"
----------------------- debugLogs end: false-008030 [took: 4.811051202s] --------------------------------
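Note: every probe above fails because the false-008030 profile is never actually provisioned; the start command exits with MK_USAGE first. The one entry that "succeeds" is "k8s: kubectl config", which simply dumps the current kubeconfig and therefore prints an empty document (clusters: null, contexts: null). Assuming a similarly empty kubeconfig, the equivalent manual check is:

	kubectl config view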
helpers_test.go:175: Cleaning up "false-008030" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-008030
--- PASS: TestNetworkPlugins/group/false (5.25s)

TestStartStop/group/old-k8s-version/serial/FirstStart (167.41s)
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-813213 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
E0127 13:15:33.634315 1186773 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/addons-453723/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:16:55.561851 1186773 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/functional-547155/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-813213 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: (2m47.412727908s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (167.41s)

TestStartStop/group/no-preload/serial/FirstStart (63.53s)
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-181914 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-181914 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.1: (1m3.527134594s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (63.53s)

TestStartStop/group/old-k8s-version/serial/DeployApp (10.75s)
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-813213 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [f8f0a093-200c-4373-8f95-6add1c05b9ba] Pending
helpers_test.go:344: "busybox" [f8f0a093-200c-4373-8f95-6add1c05b9ba] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [f8f0a093-200c-4373-8f95-6add1c05b9ba] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.004349881s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-813213 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.75s)
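The manifest behind this step (testdata/busybox.yaml) is not reproduced in the log. Judging from the pod name and the integration-test=busybox label the test waits on, the deployment is roughly equivalent to the following one-liner; the image tag is borrowed from the VerifyKubernetesImages output further down, and the sleep command is an assumption:

	kubectl --context old-k8s-version-813213 run busybox --image=gcr.io/k8s-minikube/busybox:1.28.4-glibc --labels=integration-test=busybox -- sleep 3600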
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.98s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-813213 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-813213 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (2.486312894s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-813213 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.98s)
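Note: the --images/--registries flags override where an addon pulls its images from. Here metrics-server is pointed at the unreachable fake.domain registry, presumably so the suite can verify the addon wiring without depending on a real image pull. The general form (the image key must match the addon's documented name) is:

	minikube addons enable <addon> -p <profile> --images=<Key>=<image:tag> --registries=<Key>=<registry>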
TestStartStop/group/old-k8s-version/serial/Stop (12.34s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-813213 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-813213 --alsologtostderr -v=3: (12.335781371s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.34s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.28s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-813213 -n old-k8s-version-813213
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-813213 -n old-k8s-version-813213: exit status 7 (114.17324ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-813213 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.28s)

TestStartStop/group/no-preload/serial/DeployApp (8.39s)
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-181914 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [38a69ca2-41e2-4f7e-806c-9c1cf6795943] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [38a69ca2-41e2-4f7e-806c-9c1cf6795943] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.004771424s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-181914 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.39s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.17s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-181914 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-181914 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.059672038s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-181914 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.17s)

TestStartStop/group/no-preload/serial/Stop (12.60s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-181914 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-181914 --alsologtostderr -v=3: (12.599021699s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.60s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-181914 -n no-preload-181914
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-181914 -n no-preload-181914: exit status 7 (75.847409ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-181914 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/no-preload/serial/SecondStart (288.69s)
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-181914 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.1
E0127 13:19:58.631063 1186773 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/functional-547155/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:20:33.634637 1186773 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/addons-453723/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:21:55.562044 1186773 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/functional-547155/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-181914 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.1: (4m48.317896925s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-181914 -n no-preload-181914
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (288.69s)
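Note: --preload=false disables minikube's preloaded image tarball, so this profile pulls every component image individually on both starts. To confirm what ended up in the node's image store, the same listing the VerifyKubernetesImages step runs below works manually:

	out/minikube-linux-arm64 -p no-preload-181914 image list --format=json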
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-6598c" [8a5f08aa-0045-4b56-b141-3518343f6a47] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005058933s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.10s)
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-6598c" [8a5f08aa-0045-4b56-b141-3518343f6a47] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004372733s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-181914 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.10s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-181914 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/no-preload/serial/Pause (3.11s)
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-181914 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-181914 -n no-preload-181914
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-181914 -n no-preload-181914: exit status 2 (335.393375ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-181914 -n no-preload-181914
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-181914 -n no-preload-181914: exit status 2 (337.86522ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-181914 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-181914 -n no-preload-181914
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-181914 -n no-preload-181914
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.11s)
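Note: the exit-status-2 results above are the assertion, not a failure. After minikube pause, status reports the API server as "Paused" and the kubelet as "Stopped", and minikube status encodes any non-running component in a non-zero exit code (compare the exit status 7 seen after a full stop). Condensed, the probe sequence is:

	out/minikube-linux-arm64 pause -p no-preload-181914
	out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-181914
	out/minikube-linux-arm64 unpause -p no-preload-181914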
TestStartStop/group/embed-certs/serial/FirstStart (92.79s)
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-434512 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-434512 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.1: (1m32.791101645s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (92.79s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-r9xkk" [77ed34ef-6cf8-4110-af5b-65b4ea7e8c9d] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00354276s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.15s)
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-r9xkk" [77ed34ef-6cf8-4110-af5b-65b4ea7e8c9d] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00335515s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-813213 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.15s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.42s)
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-813213 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.42s)

TestStartStop/group/old-k8s-version/serial/Pause (4.38s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-813213 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 pause -p old-k8s-version-813213 --alsologtostderr -v=1: (1.204776569s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-813213 -n old-k8s-version-813213
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-813213 -n old-k8s-version-813213: exit status 2 (521.378489ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-813213 -n old-k8s-version-813213
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-813213 -n old-k8s-version-813213: exit status 2 (464.721804ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-813213 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 unpause -p old-k8s-version-813213 --alsologtostderr -v=1: (1.121691887s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-813213 -n old-k8s-version-813213
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-813213 -n old-k8s-version-813213
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (4.38s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (55.62s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-800335 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.1
E0127 13:25:33.634493 1186773 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/addons-453723/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-800335 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.1: (55.623121746s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (55.62s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.37s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-800335 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [3ce00c69-11b2-45a7-b75b-9b900beb7e73] Pending
helpers_test.go:344: "busybox" [3ce00c69-11b2-45a7-b75b-9b900beb7e73] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [3ce00c69-11b2-45a7-b75b-9b900beb7e73] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.005807327s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-800335 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.37s)

TestStartStop/group/embed-certs/serial/DeployApp (8.53s)
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-434512 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [3d25e364-3a10-49ac-9a9c-89349126b8d5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [3d25e364-3a10-49ac-9a9c-89349126b8d5] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.010072745s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-434512 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.53s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.10s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-800335 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-800335 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.006345697s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-800335 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.10s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.14s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-800335 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-800335 --alsologtostderr -v=3: (12.138036711s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.14s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.33s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-434512 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-434512 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.230458302s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-434512 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.33s)

TestStartStop/group/embed-certs/serial/Stop (11.92s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-434512 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-434512 --alsologtostderr -v=3: (11.922807713s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.92s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-800335 -n default-k8s-diff-port-800335
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-800335 -n default-k8s-diff-port-800335: exit status 7 (73.629041ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-800335 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (292.45s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-800335 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-800335 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.1: (4m51.928945135s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-800335 -n default-k8s-diff-port-800335
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (292.45s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-434512 -n embed-certs-434512
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-434512 -n embed-certs-434512: exit status 7 (73.287528ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-434512 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/embed-certs/serial/SecondStart (271.88s)
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-434512 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.1
E0127 13:26:55.561392 1186773 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/functional-547155/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:27:47.592771 1186773 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/old-k8s-version-813213/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:27:47.599297 1186773 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/old-k8s-version-813213/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:27:47.610664 1186773 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/old-k8s-version-813213/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:27:47.632095 1186773 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/old-k8s-version-813213/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:27:47.673417 1186773 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/old-k8s-version-813213/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:27:47.754855 1186773 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/old-k8s-version-813213/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:27:47.916362 1186773 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/old-k8s-version-813213/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:27:48.238034 1186773 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/old-k8s-version-813213/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:27:48.880100 1186773 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/old-k8s-version-813213/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:27:50.162318 1186773 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/old-k8s-version-813213/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:27:52.723681 1186773 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/old-k8s-version-813213/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:27:57.846076 1186773 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/old-k8s-version-813213/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:28:08.089071 1186773 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/old-k8s-version-813213/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:28:28.570396 1186773 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/old-k8s-version-813213/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:28:44.417020 1186773 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/no-preload-181914/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:28:44.423468 1186773 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/no-preload-181914/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:28:44.434935 1186773 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/no-preload-181914/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:28:44.456320 1186773 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/no-preload-181914/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:28:44.497820 1186773 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/no-preload-181914/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:28:44.579340 1186773 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/no-preload-181914/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:28:44.741000 1186773 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/no-preload-181914/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:28:45.062421 1186773 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/no-preload-181914/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:28:45.704989 1186773 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/no-preload-181914/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:28:46.986402 1186773 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/no-preload-181914/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:28:49.547841 1186773 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/no-preload-181914/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:28:54.669875 1186773 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/no-preload-181914/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:29:04.911604 1186773 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/no-preload-181914/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:29:09.532517 1186773 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/old-k8s-version-813213/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:29:25.393476 1186773 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/no-preload-181914/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:30:06.354853 1186773 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/no-preload-181914/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:30:16.702354 1186773 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/addons-453723/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:30:31.454192 1186773 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/old-k8s-version-813213/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:30:33.634655 1186773 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/addons-453723/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-434512 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.1: (4m31.525416046s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-434512 -n embed-certs-434512
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (271.88s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-hpt6f" [4ecba77d-cc51-48e8-b4c6-e810088ffba0] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.002968197s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-hpt6f" [4ecba77d-cc51-48e8-b4c6-e810088ffba0] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003566373s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-434512 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-434512 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/embed-certs/serial/Pause (3.08s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-434512 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-434512 -n embed-certs-434512
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-434512 -n embed-certs-434512: exit status 2 (312.440887ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-434512 -n embed-certs-434512
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-434512 -n embed-certs-434512: exit status 2 (325.903813ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-434512 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-434512 -n embed-certs-434512
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-434512 -n embed-certs-434512
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.08s)

TestStartStop/group/newest-cni/serial/FirstStart (45.34s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-903950 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-903950 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.1: (45.340042302s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (45.34s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-2bmxc" [13f3c6c7-5739-4239-a993-93487de2689f] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003355378s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-2bmxc" [13f3c6c7-5739-4239-a993-93487de2689f] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004089347s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-800335 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.11s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-800335 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.28s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (4.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-800335 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 pause -p default-k8s-diff-port-800335 --alsologtostderr -v=1: (1.016718403s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-800335 -n default-k8s-diff-port-800335
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-800335 -n default-k8s-diff-port-800335: exit status 2 (457.515188ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-800335 -n default-k8s-diff-port-800335
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-800335 -n default-k8s-diff-port-800335: exit status 2 (490.455601ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-800335 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-800335 -n default-k8s-diff-port-800335
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-800335 -n default-k8s-diff-port-800335
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (4.10s)

TestNetworkPlugins/group/auto/Start (96.79s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-008030 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
E0127 13:31:28.276989 1186773 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/no-preload-181914/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-008030 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (1m36.78923411s)
--- PASS: TestNetworkPlugins/group/auto/Start (96.79s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.32s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-903950 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-903950 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.316430648s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.32s)

TestStartStop/group/newest-cni/serial/Stop (1.28s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-903950 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-903950 --alsologtostderr -v=3: (1.281825713s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.28s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.24s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-903950 -n newest-cni-903950
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-903950 -n newest-cni-903950: exit status 7 (91.755868ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-903950 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.24s)

TestStartStop/group/newest-cni/serial/SecondStart (21.27s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-903950 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.1
E0127 13:31:55.562130 1186773 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/functional-547155/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-903950 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.1: (20.798014719s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-903950 -n newest-cni-903950
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (21.27s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.3s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-903950 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.30s)

TestStartStop/group/newest-cni/serial/Pause (2.96s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-903950 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-903950 -n newest-cni-903950
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-903950 -n newest-cni-903950: exit status 2 (310.267615ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-903950 -n newest-cni-903950
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-903950 -n newest-cni-903950: exit status 2 (356.824887ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-903950 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-903950 -n newest-cni-903950
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-903950 -n newest-cni-903950
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.96s)

TestNetworkPlugins/group/kindnet/Start (64.04s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-008030 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
E0127 13:32:47.592898 1186773 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/old-k8s-version-813213/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-008030 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (1m4.037742381s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (64.04s)

TestNetworkPlugins/group/auto/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-008030 "pgrep -a kubelet"
I0127 13:32:56.432751 1186773 config.go:182] Loaded profile config "auto-008030": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.32s)

TestNetworkPlugins/group/auto/NetCatPod (10.32s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-008030 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-xw5hw" [a62ad260-eea1-410f-8124-8a1b6a5f1d47] Pending
helpers_test.go:344: "netcat-5d86dc444-xw5hw" [a62ad260-eea1-410f-8124-8a1b6a5f1d47] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.005671958s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.32s)

TestNetworkPlugins/group/auto/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-008030 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.20s)

TestNetworkPlugins/group/auto/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-008030 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.18s)

TestNetworkPlugins/group/auto/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-008030 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.18s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-z6g59" [0e714d12-a1f9-4876-8d20-00bff88bfa6a] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.0039304s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.4s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-008030 "pgrep -a kubelet"
I0127 13:33:23.644187 1186773 config.go:182] Loaded profile config "kindnet-008030": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.40s)

TestNetworkPlugins/group/kindnet/NetCatPod (11.35s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-008030 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-crhrh" [6a1e87c3-53a1-44dd-b2dd-933021e40264] Pending
helpers_test.go:344: "netcat-5d86dc444-crhrh" [6a1e87c3-53a1-44dd-b2dd-933021e40264] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-crhrh" [6a1e87c3-53a1-44dd-b2dd-933021e40264] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.003884957s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.35s)

TestNetworkPlugins/group/calico/Start (72.88s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-008030 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-008030 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (1m12.876721277s)
--- PASS: TestNetworkPlugins/group/calico/Start (72.88s)

TestNetworkPlugins/group/kindnet/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-008030 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.24s)

TestNetworkPlugins/group/kindnet/Localhost (0.36s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-008030 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.36s)

TestNetworkPlugins/group/kindnet/HairPin (0.54s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-008030 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.54s)

TestNetworkPlugins/group/custom-flannel/Start (58.02s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-008030 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
E0127 13:34:12.118753 1186773 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/no-preload-181914/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-008030 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (58.015182629s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (58.02s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-tl7vj" [d0b1cb3c-a274-4d28-9ba0-ab1d60fda39f] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005025848s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-008030 "pgrep -a kubelet"
I0127 13:34:48.376326 1186773 config.go:182] Loaded profile config "calico-008030": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.33s)

TestNetworkPlugins/group/calico/NetCatPod (10.26s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-008030 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-l45xm" [e9938fba-b693-4278-8f1e-21f8e10c04d0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-l45xm" [e9938fba-b693-4278-8f1e-21f8e10c04d0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.003876717s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.26s)

TestNetworkPlugins/group/calico/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-008030 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.20s)

TestNetworkPlugins/group/calico/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-008030 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.18s)

TestNetworkPlugins/group/calico/HairPin (0.27s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-008030 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.27s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.42s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-008030 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.42s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (10.36s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-008030 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-2sf9k" [22630be0-c448-4642-9f1f-a3b5eb8a659b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-2sf9k" [22630be0-c448-4642-9f1f-a3b5eb8a659b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.007011865s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.36s)

TestNetworkPlugins/group/custom-flannel/DNS (0.33s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-008030 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.33s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.22s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-008030 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.22s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-008030 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.20s)

TestNetworkPlugins/group/enable-default-cni/Start (55.41s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-008030 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-008030 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (55.413467448s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (55.41s)

TestNetworkPlugins/group/flannel/Start (56.13s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-008030 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
E0127 13:35:45.197504 1186773 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/default-k8s-diff-port-800335/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:35:45.203859 1186773 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/default-k8s-diff-port-800335/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:35:45.215205 1186773 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/default-k8s-diff-port-800335/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:35:45.236645 1186773 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/default-k8s-diff-port-800335/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:35:45.278217 1186773 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/default-k8s-diff-port-800335/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:35:45.359673 1186773 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/default-k8s-diff-port-800335/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:35:45.521330 1186773 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/default-k8s-diff-port-800335/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:35:45.843294 1186773 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/default-k8s-diff-port-800335/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:35:46.484970 1186773 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/default-k8s-diff-port-800335/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:35:47.767361 1186773 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/default-k8s-diff-port-800335/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:35:50.328949 1186773 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/default-k8s-diff-port-800335/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:35:55.451109 1186773 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/default-k8s-diff-port-800335/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:36:05.692689 1186773 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/default-k8s-diff-port-800335/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-008030 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (56.133263552s)
--- PASS: TestNetworkPlugins/group/flannel/Start (56.13s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-008030 "pgrep -a kubelet"
I0127 13:36:18.864105 1186773 config.go:182] Loaded profile config "enable-default-cni-008030": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.29s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.42s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-008030 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-sd564" [0c1fa066-e529-4503-b9e4-4b8718aa1f7a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-sd564" [0c1fa066-e529-4503-b9e4-4b8718aa1f7a] Running
E0127 13:36:26.174112 1186773 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/default-k8s-diff-port-800335/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.005391672s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.42s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-008030 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-008030 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-008030 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-wnmwh" [2b87b658-6a3e-45c9-9d98-16e329c04d2c] Running
E0127 13:36:38.632884 1186773 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/functional-547155/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.00575586s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.41s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-008030 "pgrep -a kubelet"
I0127 13:36:39.280054 1186773 config.go:182] Loaded profile config "flannel-008030": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.41s)

TestNetworkPlugins/group/flannel/NetCatPod (10.35s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-008030 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-v2xlf" [4b3f768a-b3cf-4baf-a53f-14a75eaf21c6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-v2xlf" [4b3f768a-b3cf-4baf-a53f-14a75eaf21c6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.004863839s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.35s)

TestNetworkPlugins/group/flannel/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-008030 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.23s)

TestNetworkPlugins/group/flannel/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-008030 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.20s)

TestNetworkPlugins/group/bridge/Start (46.59s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-008030 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-008030 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (46.586129662s)
--- PASS: TestNetworkPlugins/group/bridge/Start (46.59s)
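After a Start like the one above, a quick manual sanity check of the resulting cluster (hypothetical here, since the suite tears the profile down when the group finishes) would be:

	out/minikube-linux-arm64 status -p bridge-008030
	kubectl --context bridge-008030 get nodes -o wide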

TestNetworkPlugins/group/flannel/HairPin (0.23s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-008030 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.23s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.28s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-008030 "pgrep -a kubelet"
I0127 13:37:36.792794 1186773 config.go:182] Loaded profile config "bridge-008030": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.28s)

TestNetworkPlugins/group/bridge/NetCatPod (9.27s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-008030 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-6cppw" [e317d6f7-1eae-496c-9c42-aab19d7b231f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-6cppw" [e317d6f7-1eae-496c-9c42-aab19d7b231f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.004363979s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.27s)

TestNetworkPlugins/group/bridge/DNS (0.17s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-008030 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

TestNetworkPlugins/group/bridge/Localhost (0.15s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-008030 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

TestNetworkPlugins/group/bridge/HairPin (0.15s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-008030 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)

Test skip (29/330)

TestDownloadOnly/v1.20.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.32.1/cached-images (0s)
=== RUN   TestDownloadOnly/v1.32.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.32.1/cached-images (0.00s)

TestDownloadOnly/v1.32.1/binaries (0s)
=== RUN   TestDownloadOnly/v1.32.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.32.1/binaries (0.00s)

TestDownloadOnly/v1.32.1/kubectl (0s)
=== RUN   TestDownloadOnly/v1.32.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.32.1/kubectl (0.00s)

TestDownloadOnlyKic (0.58s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-609131 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-609131" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-609131
--- SKIP: TestDownloadOnlyKic (0.58s)

TestOffline (0s)
=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0s)
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:698: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:972: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerFlags (0s)
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestKVMDriverInstallOrUpdate (0s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)
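skaffold builds against the Docker daemon that minikube docker-env exposes, and the containerd runtime offers no such socket, hence the skip. The usual wiring, sketched with a placeholder profile name rather than a real one from this run, looks like:

	eval $(out/minikube-linux-arm64 -p <profile> docker-env)
	skaffold run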

TestStartStop/group/disable-driver-mounts (0.18s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-799729" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-799729
--- SKIP: TestStartStop/group/disable-driver-mounts (0.18s)

TestNetworkPlugins/group/kubenet (5.56s)
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-008030 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-008030

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-008030

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-008030

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-008030

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-008030

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-008030

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-008030

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-008030

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-008030

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-008030

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-008030" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-008030"

>>> host: /etc/hosts:
* Profile "kubenet-008030" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-008030"

>>> host: /etc/resolv.conf:
* Profile "kubenet-008030" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-008030"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-008030

>>> host: crictl pods:
* Profile "kubenet-008030" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-008030"

>>> host: crictl containers:
* Profile "kubenet-008030" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-008030"

>>> k8s: describe netcat deployment:
error: context "kubenet-008030" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-008030" does not exist

>>> k8s: netcat logs:
error: context "kubenet-008030" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-008030" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-008030" does not exist

>>> k8s: coredns logs:
error: context "kubenet-008030" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-008030" does not exist

>>> k8s: api server logs:
error: context "kubenet-008030" does not exist

>>> host: /etc/cni:
* Profile "kubenet-008030" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-008030"

>>> host: ip a s:
* Profile "kubenet-008030" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-008030"

>>> host: ip r s:
* Profile "kubenet-008030" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-008030"

>>> host: iptables-save:
* Profile "kubenet-008030" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-008030"

>>> host: iptables table nat:
* Profile "kubenet-008030" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-008030"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-008030" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-008030" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-008030" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-008030" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-008030"

>>> host: kubelet daemon config:
* Profile "kubenet-008030" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-008030"

>>> k8s: kubelet logs:
* Profile "kubenet-008030" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-008030"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-008030" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-008030"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-008030" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-008030"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20317-1181389/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 27 Jan 2025 13:13:33 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: force-systemd-flag-334749
contexts:
- context:
    cluster: force-systemd-flag-334749
    extensions:
    - extension:
        last-update: Mon, 27 Jan 2025 13:13:33 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: force-systemd-flag-334749
  name: force-systemd-flag-334749
current-context: force-systemd-flag-334749
kind: Config
preferences: {}
users:
- name: force-systemd-flag-334749
  user:
    client-certificate: /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/force-systemd-flag-334749/client.crt
    client-key: /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/force-systemd-flag-334749/client.key
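Note that this dump contains only a leftover force-systemd-flag-334749 context rather than kubenet-008030, which is consistent with the kubenet profile never having been created before debugLogs ran. A generic way to see which context such a kubeconfig would select (plain kubectl, not part of the harness):

	kubectl config current-context
	kubectl config view --minify --flatten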

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-008030

>>> host: docker daemon status:
* Profile "kubenet-008030" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-008030"

>>> host: docker daemon config:
* Profile "kubenet-008030" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-008030"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-008030" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-008030"

>>> host: docker system info:
* Profile "kubenet-008030" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-008030"

>>> host: cri-docker daemon status:
* Profile "kubenet-008030" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-008030"

>>> host: cri-docker daemon config:
* Profile "kubenet-008030" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-008030"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-008030" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-008030"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-008030" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-008030"

>>> host: cri-dockerd version:
* Profile "kubenet-008030" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-008030"

>>> host: containerd daemon status:
* Profile "kubenet-008030" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-008030"

>>> host: containerd daemon config:
* Profile "kubenet-008030" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-008030"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-008030" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-008030"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-008030" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-008030"

>>> host: containerd config dump:
* Profile "kubenet-008030" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-008030"

>>> host: crio daemon status:
* Profile "kubenet-008030" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-008030"

>>> host: crio daemon config:
* Profile "kubenet-008030" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-008030"

>>> host: /etc/crio:
* Profile "kubenet-008030" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-008030"

>>> host: crio config:
* Profile "kubenet-008030" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-008030"

----------------------- debugLogs end: kubenet-008030 [took: 5.347351311s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-008030" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-008030
--- SKIP: TestNetworkPlugins/group/kubenet (5.56s)

TestNetworkPlugins/group/cilium (5.9s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-008030 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-008030

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-008030

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-008030

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-008030

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-008030

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-008030

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-008030

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-008030

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-008030

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-008030

>>> host: /etc/nsswitch.conf:
* Profile "cilium-008030" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-008030"

>>> host: /etc/hosts:
* Profile "cilium-008030" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-008030"

>>> host: /etc/resolv.conf:
* Profile "cilium-008030" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-008030"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-008030

>>> host: crictl pods:
* Profile "cilium-008030" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-008030"

>>> host: crictl containers:
* Profile "cilium-008030" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-008030"

>>> k8s: describe netcat deployment:
error: context "cilium-008030" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-008030" does not exist

>>> k8s: netcat logs:
error: context "cilium-008030" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-008030" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-008030" does not exist

>>> k8s: coredns logs:
error: context "cilium-008030" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-008030" does not exist

>>> k8s: api server logs:
error: context "cilium-008030" does not exist

>>> host: /etc/cni:
* Profile "cilium-008030" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-008030"

>>> host: ip a s:
* Profile "cilium-008030" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-008030"

>>> host: ip r s:
* Profile "cilium-008030" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-008030"

>>> host: iptables-save:
* Profile "cilium-008030" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-008030"

>>> host: iptables table nat:
* Profile "cilium-008030" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-008030"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-008030

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-008030

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-008030" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-008030" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-008030

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-008030

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-008030" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-008030" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-008030" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-008030" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-008030" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-008030" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-008030"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-008030" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-008030"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-008030" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-008030"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-008030" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-008030"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-008030" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-008030"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-008030

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-008030" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-008030"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-008030" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-008030"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-008030" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-008030"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-008030" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-008030"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-008030" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-008030"

>>> host: cri-docker daemon config:
* Profile "cilium-008030" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-008030"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-008030" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-008030"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-008030" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-008030"

>>> host: cri-dockerd version:
* Profile "cilium-008030" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-008030"

>>> host: containerd daemon status:
* Profile "cilium-008030" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-008030"

>>> host: containerd daemon config:
* Profile "cilium-008030" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-008030"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-008030" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-008030"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-008030" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-008030"

>>> host: containerd config dump:
* Profile "cilium-008030" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-008030"

>>> host: crio daemon status:
* Profile "cilium-008030" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-008030"

>>> host: crio daemon config:
* Profile "cilium-008030" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-008030"

>>> host: /etc/crio:
* Profile "cilium-008030" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-008030"

>>> host: crio config:
* Profile "cilium-008030" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-008030"

----------------------- debugLogs end: cilium-008030 [took: 5.69007351s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-008030" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-008030
--- SKIP: TestNetworkPlugins/group/cilium (5.90s)