Test Report: Docker_Linux_containerd_arm64 19888

                    
b240f9d77986126e9714444475c34e6cc49a474f:2024-12-09:37414

Test fail (1/330)

Order   Failed test                                                Duration (s)
304     TestStartStop/group/old-k8s-version/serial/SecondStart    382.58
TestStartStop/group/old-k8s-version/serial/SecondStart (382.58s)
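For local triage, a minimal reproduction sketch follows; it assumes a checked-out minikube source tree with the integration tests under test/integration, and the -timeout value is illustrative rather than taken from this report. The exact minikube start command the subtest issued is recorded at start_stop_delete_test.go:256 in the log below.

    # Assumed reproduction command (not part of the captured log):
    go test ./test/integration \
      -run 'TestStartStop/group/old-k8s-version/serial/SecondStart' \
      -timeout 60m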

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-098617 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
E1209 23:14:23.257917    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/functional-463603/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:14:49.556284    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/addons-013873/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p old-k8s-version-098617 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: exit status 102 (6m18.207023195s)

                                                
                                                
-- stdout --
	* [old-k8s-version-098617] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19888
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19888-2244/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19888-2244/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	* Using the docker driver based on existing profile
	* Starting "old-k8s-version-098617" primary control-plane node in "old-k8s-version-098617" cluster
	* Pulling base image v0.0.45-1730888964-19917 ...
	* Restarting existing docker container for "old-k8s-version-098617" ...
	* Preparing Kubernetes v1.20.0 on containerd 1.7.22 ...
	* Verifying Kubernetes components...
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image registry.k8s.io/echoserver:1.4
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-098617 addons enable metrics-server
	
	* Enabled addons: storage-provisioner, metrics-server, dashboard, default-storageclass
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1209 23:14:11.391285  214436 out.go:345] Setting OutFile to fd 1 ...
	I1209 23:14:11.391523  214436 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 23:14:11.391549  214436 out.go:358] Setting ErrFile to fd 2...
	I1209 23:14:11.391568  214436 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 23:14:11.391860  214436 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19888-2244/.minikube/bin
	I1209 23:14:11.392303  214436 out.go:352] Setting JSON to false
	I1209 23:14:11.393201  214436 start.go:129] hostinfo: {"hostname":"ip-172-31-24-2","uptime":3398,"bootTime":1733782653,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1072-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1209 23:14:11.393301  214436 start.go:139] virtualization:  
	I1209 23:14:11.395535  214436 out.go:177] * [old-k8s-version-098617] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1209 23:14:11.397129  214436 out.go:177]   - MINIKUBE_LOCATION=19888
	I1209 23:14:11.397204  214436 notify.go:220] Checking for updates...
	I1209 23:14:11.399531  214436 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 23:14:11.402745  214436 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19888-2244/kubeconfig
	I1209 23:14:11.405419  214436 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19888-2244/.minikube
	I1209 23:14:11.408748  214436 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1209 23:14:11.411805  214436 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 23:14:11.414269  214436 config.go:182] Loaded profile config "old-k8s-version-098617": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I1209 23:14:11.417089  214436 out.go:177] * Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	I1209 23:14:11.419067  214436 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 23:14:11.464567  214436 docker.go:123] docker version: linux-27.4.0:Docker Engine - Community
	I1209 23:14:11.464682  214436 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 23:14:11.556991  214436 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:59 SystemTime:2024-12-09 23:14:11.547833129 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1072-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-n
f-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0]] Warnings:<nil>}}
	I1209 23:14:11.557105  214436 docker.go:318] overlay module found
	I1209 23:14:11.561167  214436 out.go:177] * Using the docker driver based on existing profile
	I1209 23:14:11.562934  214436 start.go:297] selected driver: docker
	I1209 23:14:11.562949  214436 start.go:901] validating driver "docker" against &{Name:old-k8s-version-098617 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-098617 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 23:14:11.563068  214436 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 23:14:11.563778  214436 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 23:14:11.634037  214436 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:59 SystemTime:2024-12-09 23:14:11.624366058 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1072-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-n
f-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0]] Warnings:<nil>}}
	I1209 23:14:11.634458  214436 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 23:14:11.634486  214436 cni.go:84] Creating CNI manager for ""
	I1209 23:14:11.634531  214436 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1209 23:14:11.634572  214436 start.go:340] cluster config:
	{Name:old-k8s-version-098617 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-098617 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:do
cker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 23:14:11.637401  214436 out.go:177] * Starting "old-k8s-version-098617" primary control-plane node in "old-k8s-version-098617" cluster
	I1209 23:14:11.638884  214436 cache.go:121] Beginning downloading kic base image for docker with containerd
	I1209 23:14:11.640411  214436 out.go:177] * Pulling base image v0.0.45-1730888964-19917 ...
	I1209 23:14:11.641827  214436 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I1209 23:14:11.641887  214436 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19888-2244/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I1209 23:14:11.641896  214436 cache.go:56] Caching tarball of preloaded images
	I1209 23:14:11.641982  214436 preload.go:172] Found /home/jenkins/minikube-integration/19888-2244/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1209 23:14:11.641993  214436 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on containerd
	I1209 23:14:11.642103  214436 profile.go:143] Saving config to /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/old-k8s-version-098617/config.json ...
	I1209 23:14:11.642309  214436 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 in local docker daemon
	I1209 23:14:11.670121  214436 image.go:98] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 in local docker daemon, skipping pull
	I1209 23:14:11.670146  214436 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 exists in daemon, skipping load
	I1209 23:14:11.670161  214436 cache.go:194] Successfully downloaded all kic artifacts
	I1209 23:14:11.670184  214436 start.go:360] acquireMachinesLock for old-k8s-version-098617: {Name:mk653849e4ebf1e5c8bcd0acd3ea80cca1cdb2c4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 23:14:11.670246  214436 start.go:364] duration metric: took 37.284µs to acquireMachinesLock for "old-k8s-version-098617"
	I1209 23:14:11.670273  214436 start.go:96] Skipping create...Using existing machine configuration
	I1209 23:14:11.670282  214436 fix.go:54] fixHost starting: 
	I1209 23:14:11.670528  214436 cli_runner.go:164] Run: docker container inspect old-k8s-version-098617 --format={{.State.Status}}
	I1209 23:14:11.700397  214436 fix.go:112] recreateIfNeeded on old-k8s-version-098617: state=Stopped err=<nil>
	W1209 23:14:11.700425  214436 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 23:14:11.702392  214436 out.go:177] * Restarting existing docker container for "old-k8s-version-098617" ...
	I1209 23:14:11.703841  214436 cli_runner.go:164] Run: docker start old-k8s-version-098617
	I1209 23:14:12.053321  214436 cli_runner.go:164] Run: docker container inspect old-k8s-version-098617 --format={{.State.Status}}
	I1209 23:14:12.087469  214436 kic.go:430] container "old-k8s-version-098617" state is running.
	I1209 23:14:12.088004  214436 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-098617
	I1209 23:14:12.126927  214436 profile.go:143] Saving config to /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/old-k8s-version-098617/config.json ...
	I1209 23:14:12.127237  214436 machine.go:93] provisionDockerMachine start ...
	I1209 23:14:12.127320  214436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-098617
	I1209 23:14:12.162611  214436 main.go:141] libmachine: Using SSH client type: native
	I1209 23:14:12.162965  214436 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x415f50] 0x418790 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1209 23:14:12.162983  214436 main.go:141] libmachine: About to run SSH command:
	hostname
	I1209 23:14:12.165237  214436 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1209 23:14:15.298259  214436 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-098617
	
	I1209 23:14:15.298287  214436 ubuntu.go:169] provisioning hostname "old-k8s-version-098617"
	I1209 23:14:15.298366  214436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-098617
	I1209 23:14:15.325145  214436 main.go:141] libmachine: Using SSH client type: native
	I1209 23:14:15.325420  214436 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x415f50] 0x418790 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1209 23:14:15.325438  214436 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-098617 && echo "old-k8s-version-098617" | sudo tee /etc/hostname
	I1209 23:14:15.468496  214436 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-098617
	
	I1209 23:14:15.468668  214436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-098617
	I1209 23:14:15.497061  214436 main.go:141] libmachine: Using SSH client type: native
	I1209 23:14:15.497313  214436 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x415f50] 0x418790 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1209 23:14:15.497339  214436 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-098617' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-098617/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-098617' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 23:14:15.622957  214436 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 23:14:15.622987  214436 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19888-2244/.minikube CaCertPath:/home/jenkins/minikube-integration/19888-2244/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19888-2244/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19888-2244/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19888-2244/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19888-2244/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19888-2244/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19888-2244/.minikube}
	I1209 23:14:15.623013  214436 ubuntu.go:177] setting up certificates
	I1209 23:14:15.623024  214436 provision.go:84] configureAuth start
	I1209 23:14:15.623086  214436 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-098617
	I1209 23:14:15.642686  214436 provision.go:143] copyHostCerts
	I1209 23:14:15.642797  214436 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-2244/.minikube/cert.pem, removing ...
	I1209 23:14:15.642818  214436 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-2244/.minikube/cert.pem
	I1209 23:14:15.642893  214436 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-2244/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19888-2244/.minikube/cert.pem (1123 bytes)
	I1209 23:14:15.643007  214436 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-2244/.minikube/key.pem, removing ...
	I1209 23:14:15.643019  214436 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-2244/.minikube/key.pem
	I1209 23:14:15.643049  214436 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-2244/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19888-2244/.minikube/key.pem (1675 bytes)
	I1209 23:14:15.643125  214436 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-2244/.minikube/ca.pem, removing ...
	I1209 23:14:15.643139  214436 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-2244/.minikube/ca.pem
	I1209 23:14:15.643166  214436 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-2244/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19888-2244/.minikube/ca.pem (1078 bytes)
	I1209 23:14:15.643228  214436 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19888-2244/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19888-2244/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19888-2244/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-098617 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-098617]
	I1209 23:14:16.075534  214436 provision.go:177] copyRemoteCerts
	I1209 23:14:16.075664  214436 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 23:14:16.075747  214436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-098617
	I1209 23:14:16.094331  214436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/19888-2244/.minikube/machines/old-k8s-version-098617/id_rsa Username:docker}
	I1209 23:14:16.196765  214436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-2244/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1209 23:14:16.221393  214436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-2244/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1209 23:14:16.253256  214436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-2244/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1209 23:14:16.311867  214436 provision.go:87] duration metric: took 688.824014ms to configureAuth
	I1209 23:14:16.311894  214436 ubuntu.go:193] setting minikube options for container-runtime
	I1209 23:14:16.312127  214436 config.go:182] Loaded profile config "old-k8s-version-098617": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I1209 23:14:16.312144  214436 machine.go:96] duration metric: took 4.184897091s to provisionDockerMachine
	I1209 23:14:16.312154  214436 start.go:293] postStartSetup for "old-k8s-version-098617" (driver="docker")
	I1209 23:14:16.312178  214436 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 23:14:16.313227  214436 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 23:14:16.313307  214436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-098617
	I1209 23:14:16.341954  214436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/19888-2244/.minikube/machines/old-k8s-version-098617/id_rsa Username:docker}
	I1209 23:14:16.438869  214436 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 23:14:16.444177  214436 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1209 23:14:16.444208  214436 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1209 23:14:16.444219  214436 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1209 23:14:16.444226  214436 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1209 23:14:16.444237  214436 filesync.go:126] Scanning /home/jenkins/minikube-integration/19888-2244/.minikube/addons for local assets ...
	I1209 23:14:16.444291  214436 filesync.go:126] Scanning /home/jenkins/minikube-integration/19888-2244/.minikube/files for local assets ...
	I1209 23:14:16.444368  214436 filesync.go:149] local asset: /home/jenkins/minikube-integration/19888-2244/.minikube/files/etc/ssl/certs/76842.pem -> 76842.pem in /etc/ssl/certs
	I1209 23:14:16.444471  214436 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 23:14:16.455543  214436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-2244/.minikube/files/etc/ssl/certs/76842.pem --> /etc/ssl/certs/76842.pem (1708 bytes)
	I1209 23:14:16.493529  214436 start.go:296] duration metric: took 181.346912ms for postStartSetup
	I1209 23:14:16.493609  214436 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 23:14:16.493670  214436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-098617
	I1209 23:14:16.513477  214436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/19888-2244/.minikube/machines/old-k8s-version-098617/id_rsa Username:docker}
	I1209 23:14:16.611231  214436 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1209 23:14:16.619129  214436 fix.go:56] duration metric: took 4.948839099s for fixHost
	I1209 23:14:16.619156  214436 start.go:83] releasing machines lock for "old-k8s-version-098617", held for 4.948897133s
	I1209 23:14:16.619226  214436 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-098617
	I1209 23:14:16.741129  214436 ssh_runner.go:195] Run: cat /version.json
	I1209 23:14:16.741179  214436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-098617
	I1209 23:14:16.741431  214436 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 23:14:16.741488  214436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-098617
	I1209 23:14:16.820320  214436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/19888-2244/.minikube/machines/old-k8s-version-098617/id_rsa Username:docker}
	I1209 23:14:16.913284  214436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/19888-2244/.minikube/machines/old-k8s-version-098617/id_rsa Username:docker}
	I1209 23:14:16.946999  214436 ssh_runner.go:195] Run: systemctl --version
	I1209 23:14:16.962105  214436 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1209 23:14:17.219065  214436 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I1209 23:14:17.252608  214436 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I1209 23:14:17.252693  214436 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 23:14:17.269920  214436 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1209 23:14:17.269945  214436 start.go:495] detecting cgroup driver to use...
	I1209 23:14:17.269979  214436 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1209 23:14:17.270031  214436 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1209 23:14:17.292290  214436 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1209 23:14:17.313786  214436 docker.go:217] disabling cri-docker service (if available) ...
	I1209 23:14:17.313851  214436 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 23:14:17.333858  214436 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 23:14:17.356785  214436 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 23:14:17.508643  214436 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 23:14:17.648439  214436 docker.go:233] disabling docker service ...
	I1209 23:14:17.648509  214436 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 23:14:17.662611  214436 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 23:14:17.676247  214436 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 23:14:17.784732  214436 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 23:14:17.900348  214436 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 23:14:17.914362  214436 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 23:14:17.939373  214436 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I1209 23:14:17.950041  214436 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1209 23:14:17.962877  214436 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1209 23:14:17.962952  214436 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1209 23:14:17.973508  214436 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1209 23:14:17.988371  214436 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1209 23:14:18.002290  214436 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1209 23:14:18.016703  214436 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 23:14:18.027810  214436 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1209 23:14:18.039465  214436 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 23:14:18.049980  214436 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 23:14:18.060540  214436 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 23:14:18.164068  214436 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1209 23:14:18.369830  214436 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I1209 23:14:18.369976  214436 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1209 23:14:18.377528  214436 start.go:563] Will wait 60s for crictl version
	I1209 23:14:18.377645  214436 ssh_runner.go:195] Run: which crictl
	I1209 23:14:18.381626  214436 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1209 23:14:18.429420  214436 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.22
	RuntimeApiVersion:  v1
	I1209 23:14:18.429569  214436 ssh_runner.go:195] Run: containerd --version
	I1209 23:14:18.452893  214436 ssh_runner.go:195] Run: containerd --version
	I1209 23:14:18.476220  214436 out.go:177] * Preparing Kubernetes v1.20.0 on containerd 1.7.22 ...
	I1209 23:14:18.477833  214436 cli_runner.go:164] Run: docker network inspect old-k8s-version-098617 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1209 23:14:18.503995  214436 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1209 23:14:18.510452  214436 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 23:14:18.523269  214436 kubeadm.go:883] updating cluster {Name:old-k8s-version-098617 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-098617 Namespace:default APIServerHAVIP: APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/
home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 23:14:18.523381  214436 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I1209 23:14:18.523439  214436 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 23:14:18.567611  214436 containerd.go:627] all images are preloaded for containerd runtime.
	I1209 23:14:18.567633  214436 containerd.go:534] Images already preloaded, skipping extraction
	I1209 23:14:18.567704  214436 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 23:14:18.616359  214436 containerd.go:627] all images are preloaded for containerd runtime.
	I1209 23:14:18.616429  214436 cache_images.go:84] Images are preloaded, skipping loading
	I1209 23:14:18.616467  214436 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.20.0 containerd true true} ...
	I1209 23:14:18.616628  214436 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=old-k8s-version-098617 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-098617 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1209 23:14:18.616731  214436 ssh_runner.go:195] Run: sudo crictl info
	I1209 23:14:18.665999  214436 cni.go:84] Creating CNI manager for ""
	I1209 23:14:18.666019  214436 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1209 23:14:18.666028  214436 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1209 23:14:18.666049  214436 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-098617 NodeName:old-k8s-version-098617 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt
StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1209 23:14:18.666177  214436 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "old-k8s-version-098617"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1209 23:14:18.666244  214436 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1209 23:14:18.678867  214436 binaries.go:44] Found k8s binaries, skipping transfer
	I1209 23:14:18.678984  214436 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 23:14:18.688170  214436 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (442 bytes)
	I1209 23:14:18.712331  214436 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 23:14:18.736122  214436 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2125 bytes)
	I1209 23:14:18.756576  214436 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1209 23:14:18.760308  214436 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 23:14:18.770863  214436 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 23:14:18.884998  214436 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 23:14:18.905477  214436 certs.go:68] Setting up /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/old-k8s-version-098617 for IP: 192.168.76.2
	I1209 23:14:18.905500  214436 certs.go:194] generating shared ca certs ...
	I1209 23:14:18.905517  214436 certs.go:226] acquiring lock for ca certs: {Name:mk5e5b08227e0c37038d2f29a9a492383a5cd230 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:14:18.905651  214436 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19888-2244/.minikube/ca.key
	I1209 23:14:18.905707  214436 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19888-2244/.minikube/proxy-client-ca.key
	I1209 23:14:18.905717  214436 certs.go:256] generating profile certs ...
	I1209 23:14:18.905802  214436 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/old-k8s-version-098617/client.key
	I1209 23:14:18.905865  214436 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/old-k8s-version-098617/apiserver.key.982d6abc
	I1209 23:14:18.905913  214436 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/old-k8s-version-098617/proxy-client.key
	I1209 23:14:18.906034  214436 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-2244/.minikube/certs/7684.pem (1338 bytes)
	W1209 23:14:18.906069  214436 certs.go:480] ignoring /home/jenkins/minikube-integration/19888-2244/.minikube/certs/7684_empty.pem, impossibly tiny 0 bytes
	I1209 23:14:18.906080  214436 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-2244/.minikube/certs/ca-key.pem (1675 bytes)
	I1209 23:14:18.906104  214436 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-2244/.minikube/certs/ca.pem (1078 bytes)
	I1209 23:14:18.906132  214436 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-2244/.minikube/certs/cert.pem (1123 bytes)
	I1209 23:14:18.906156  214436 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-2244/.minikube/certs/key.pem (1675 bytes)
	I1209 23:14:18.906204  214436 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-2244/.minikube/files/etc/ssl/certs/76842.pem (1708 bytes)
	I1209 23:14:18.906853  214436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-2244/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 23:14:18.945538  214436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-2244/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1209 23:14:18.991946  214436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-2244/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 23:14:19.032635  214436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-2244/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1209 23:14:19.089669  214436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/old-k8s-version-098617/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1209 23:14:19.117826  214436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/old-k8s-version-098617/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1209 23:14:19.144134  214436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/old-k8s-version-098617/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 23:14:19.170101  214436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/old-k8s-version-098617/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1209 23:14:19.195283  214436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-2244/.minikube/files/etc/ssl/certs/76842.pem --> /usr/share/ca-certificates/76842.pem (1708 bytes)
	I1209 23:14:19.220701  214436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-2244/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 23:14:19.245981  214436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-2244/.minikube/certs/7684.pem --> /usr/share/ca-certificates/7684.pem (1338 bytes)
	I1209 23:14:19.270581  214436 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 23:14:19.289767  214436 ssh_runner.go:195] Run: openssl version
	I1209 23:14:19.295727  214436 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/76842.pem && ln -fs /usr/share/ca-certificates/76842.pem /etc/ssl/certs/76842.pem"
	I1209 23:14:19.305797  214436 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/76842.pem
	I1209 23:14:19.309614  214436 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 22:34 /usr/share/ca-certificates/76842.pem
	I1209 23:14:19.309678  214436 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/76842.pem
	I1209 23:14:19.317916  214436 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/76842.pem /etc/ssl/certs/3ec20f2e.0"
	I1209 23:14:19.327464  214436 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1209 23:14:19.337436  214436 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 23:14:19.341829  214436 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 22:26 /usr/share/ca-certificates/minikubeCA.pem
	I1209 23:14:19.341900  214436 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 23:14:19.349014  214436 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1209 23:14:19.358319  214436 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7684.pem && ln -fs /usr/share/ca-certificates/7684.pem /etc/ssl/certs/7684.pem"
	I1209 23:14:19.368341  214436 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7684.pem
	I1209 23:14:19.372146  214436 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 22:34 /usr/share/ca-certificates/7684.pem
	I1209 23:14:19.372221  214436 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7684.pem
	I1209 23:14:19.379427  214436 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7684.pem /etc/ssl/certs/51391683.0"
	I1209 23:14:19.389001  214436 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 23:14:19.393080  214436 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1209 23:14:19.400365  214436 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1209 23:14:19.407476  214436 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1209 23:14:19.414411  214436 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1209 23:14:19.421615  214436 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1209 23:14:19.428805  214436 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1209 23:14:19.435961  214436 kubeadm.go:392] StartCluster: {Name:old-k8s-version-098617 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-098617 Namespace:default APIServerHAVIP: APIServerName:minikube
CA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/hom
e/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 23:14:19.436063  214436 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1209 23:14:19.436125  214436 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 23:14:19.489532  214436 cri.go:89] found id: "7b6f900a1282a6756e0904630740646ec98f08e7e8e41c3c55e56a30dba7bc7a"
	I1209 23:14:19.489554  214436 cri.go:89] found id: "99d9ed2f5b230d0319b2465314133899b5c950239a3b96d9f4feb405f1b18650"
	I1209 23:14:19.489560  214436 cri.go:89] found id: "5125ce4b5b492867a27c0ac7a7b0e99ee7d2c899aba434ce230c8fe5eb273f38"
	I1209 23:14:19.489564  214436 cri.go:89] found id: "4298e59fb9c26bc4b6c5f5daf349a3292840d4b30dcb1cb11c299810d0ed0451"
	I1209 23:14:19.489567  214436 cri.go:89] found id: "a33fca2389d21d809231c03f4d59c7c6edd2b935f0a6bee69e06642b5d121aad"
	I1209 23:14:19.489571  214436 cri.go:89] found id: "5693d8f440cbbfff0094faddfc750157e27b436336b58b60026d6e4b6afb7c19"
	I1209 23:14:19.489574  214436 cri.go:89] found id: "063e1c49d2c947a9f19b4fae6421961ea9a67ce263a258ff3303dcc0ab203f1c"
	I1209 23:14:19.489577  214436 cri.go:89] found id: "f68628204e6f96e082e5b48a6dc631b0a69e7de46bf5da75a9ca3e6911da3442"
	I1209 23:14:19.489580  214436 cri.go:89] found id: "6d1ffef5c3c113df0c1c5643d627d680020df30e159ab4a69ccf738c6f7c09e6"
	I1209 23:14:19.489585  214436 cri.go:89] found id: ""
	I1209 23:14:19.489639  214436 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1209 23:14:19.502454  214436 cri.go:116] JSON = null
	W1209 23:14:19.502502  214436 kubeadm.go:399] unpause failed: list paused: list returned 0 containers, but ps returned 9
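At this point crictl reported 9 kube-system containers but `runc list -f json` returned null, so the attempt to unpause previously paused containers is skipped with the warning above. A rough sketch of that comparison, assuming the runc JSON is an array of records with an `id` field; this is illustrative, not minikube's actual kubeadm.go logic:

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// runcContainer models the fields of `runc list -f json` output used here.
type runcContainer struct {
	ID     string `json:"id"`
	Status string `json:"status"`
}

func main() {
	// Count of kube-system containers found via crictl earlier in the log.
	crictlCount := 9

	out, err := exec.Command("sudo", "runc", "--root",
		"/run/containerd/runc/k8s.io", "list", "-f", "json").Output()
	if err != nil {
		fmt.Println("runc list failed:", err)
		return
	}

	var containers []runcContainer // stays nil when the output is literally "null"
	if err := json.Unmarshal(out, &containers); err != nil {
		fmt.Println("decode failed:", err)
		return
	}

	if len(containers) != crictlCount {
		// Mirrors the warning in the log: runc and crictl disagree on container count.
		fmt.Printf("unpause skipped: runc listed %d containers, crictl found %d\n",
			len(containers), crictlCount)
	}
}
```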
	I1209 23:14:19.502565  214436 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 23:14:19.513471  214436 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1209 23:14:19.513494  214436 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1209 23:14:19.513547  214436 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1209 23:14:19.522730  214436 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1209 23:14:19.523161  214436 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-098617" does not appear in /home/jenkins/minikube-integration/19888-2244/kubeconfig
	I1209 23:14:19.523268  214436 kubeconfig.go:62] /home/jenkins/minikube-integration/19888-2244/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-098617" cluster setting kubeconfig missing "old-k8s-version-098617" context setting]
	I1209 23:14:19.523580  214436 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-2244/kubeconfig: {Name:mke0607d72baeb496e6e8b72464517e7e676b09e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:14:19.524796  214436 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1209 23:14:19.534596  214436 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.76.2
	I1209 23:14:19.534627  214436 kubeadm.go:597] duration metric: took 21.127623ms to restartPrimaryControlPlane
	I1209 23:14:19.534637  214436 kubeadm.go:394] duration metric: took 98.686693ms to StartCluster
	I1209 23:14:19.534651  214436 settings.go:142] acquiring lock: {Name:mk8e4d73490ddd425d99594b7cef42b0539f618d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:14:19.534718  214436 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19888-2244/kubeconfig
	I1209 23:14:19.535292  214436 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-2244/kubeconfig: {Name:mke0607d72baeb496e6e8b72464517e7e676b09e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:14:19.535471  214436 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1209 23:14:19.535795  214436 config.go:182] Loaded profile config "old-k8s-version-098617": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I1209 23:14:19.535961  214436 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1209 23:14:19.536118  214436 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-098617"
	I1209 23:14:19.536150  214436 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-098617"
	W1209 23:14:19.536245  214436 addons.go:243] addon storage-provisioner should already be in state true
	I1209 23:14:19.536284  214436 host.go:66] Checking if "old-k8s-version-098617" exists ...
	I1209 23:14:19.536172  214436 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-098617"
	I1209 23:14:19.536398  214436 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-098617"
	I1209 23:14:19.536676  214436 cli_runner.go:164] Run: docker container inspect old-k8s-version-098617 --format={{.State.Status}}
	I1209 23:14:19.536179  214436 addons.go:69] Setting dashboard=true in profile "old-k8s-version-098617"
	I1209 23:14:19.537593  214436 addons.go:234] Setting addon dashboard=true in "old-k8s-version-098617"
	W1209 23:14:19.537603  214436 addons.go:243] addon dashboard should already be in state true
	I1209 23:14:19.537627  214436 host.go:66] Checking if "old-k8s-version-098617" exists ...
	I1209 23:14:19.538097  214436 cli_runner.go:164] Run: docker container inspect old-k8s-version-098617 --format={{.State.Status}}
	I1209 23:14:19.541465  214436 out.go:177] * Verifying Kubernetes components...
	I1209 23:14:19.536204  214436 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-098617"
	I1209 23:14:19.541795  214436 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-098617"
	W1209 23:14:19.541828  214436 addons.go:243] addon metrics-server should already be in state true
	I1209 23:14:19.541879  214436 host.go:66] Checking if "old-k8s-version-098617" exists ...
	I1209 23:14:19.542310  214436 cli_runner.go:164] Run: docker container inspect old-k8s-version-098617 --format={{.State.Status}}
	I1209 23:14:19.542505  214436 cli_runner.go:164] Run: docker container inspect old-k8s-version-098617 --format={{.State.Status}}
	I1209 23:14:19.546834  214436 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 23:14:19.595603  214436 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-098617"
	W1209 23:14:19.595626  214436 addons.go:243] addon default-storageclass should already be in state true
	I1209 23:14:19.595653  214436 host.go:66] Checking if "old-k8s-version-098617" exists ...
	I1209 23:14:19.596040  214436 cli_runner.go:164] Run: docker container inspect old-k8s-version-098617 --format={{.State.Status}}
	I1209 23:14:19.609310  214436 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1209 23:14:19.612224  214436 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1209 23:14:19.612258  214436 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1209 23:14:19.612329  214436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-098617
	I1209 23:14:19.627957  214436 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1209 23:14:19.629236  214436 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 23:14:19.637559  214436 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I1209 23:14:19.637663  214436 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 23:14:19.637684  214436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1209 23:14:19.637755  214436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-098617
	I1209 23:14:19.644194  214436 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1209 23:14:19.644251  214436 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1209 23:14:19.644344  214436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-098617
	I1209 23:14:19.680697  214436 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1209 23:14:19.680718  214436 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1209 23:14:19.680778  214436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-098617
	I1209 23:14:19.691444  214436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/19888-2244/.minikube/machines/old-k8s-version-098617/id_rsa Username:docker}
	I1209 23:14:19.714339  214436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/19888-2244/.minikube/machines/old-k8s-version-098617/id_rsa Username:docker}
	I1209 23:14:19.717394  214436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/19888-2244/.minikube/machines/old-k8s-version-098617/id_rsa Username:docker}
	I1209 23:14:19.743286  214436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/19888-2244/.minikube/machines/old-k8s-version-098617/id_rsa Username:docker}
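The `docker container inspect -f` calls above resolve which host port Docker mapped to the container's 22/tcp, and the resulting 127.0.0.1:33063 is what the new ssh clients connect to. A small sketch of the same lookup, reusing the Go template string from the log:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostSSHPort returns the host port that Docker mapped to 22/tcp inside the
// named container, using the inspect template seen in the log above.
func hostSSHPort(container string) (string, error) {
	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect",
		"-f", tmpl, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := hostSSHPort("old-k8s-version-098617")
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("ssh to 127.0.0.1:" + port)
}
```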
	I1209 23:14:19.807386  214436 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 23:14:19.872774  214436 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-098617" to be "Ready" ...
	I1209 23:14:19.890879  214436 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1209 23:14:19.890903  214436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1209 23:14:19.926556  214436 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1209 23:14:19.926577  214436 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1209 23:14:19.965972  214436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1209 23:14:19.973581  214436 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1209 23:14:19.973652  214436 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1209 23:14:19.985040  214436 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1209 23:14:19.985118  214436 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1209 23:14:19.992759  214436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 23:14:20.053067  214436 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1209 23:14:20.053230  214436 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1209 23:14:20.058520  214436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1209 23:14:20.102351  214436 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1209 23:14:20.102441  214436 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1209 23:14:20.180760  214436 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1209 23:14:20.180834  214436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1209 23:14:20.298512  214436 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1209 23:14:20.298589  214436 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	W1209 23:14:20.318058  214436 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 23:14:20.318153  214436 retry.go:31] will retry after 368.964975ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1209 23:14:20.318218  214436 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 23:14:20.318251  214436 retry.go:31] will retry after 182.65675ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1209 23:14:20.334856  214436 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 23:14:20.334938  214436 retry.go:31] will retry after 354.416897ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
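Every one of these addon applies fails with connection refused because the restarted apiserver on localhost:8443 is not serving yet, so each apply is rescheduled after a short randomized delay (the `will retry after ...` lines). An illustrative sketch of that retry pattern; it is not minikube's retry.go, and the delays and attempt count here are assumptions:

```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// applyManifest stands in for the kubectl apply calls in the log; it fails
// while the apiserver is still coming up.
func applyManifest(path string) error {
	return fmt.Errorf("The connection to the server localhost:8443 was refused")
}

// retryApply retries with a randomized, growing delay, the way the
// "will retry after ..." lines behave.
func retryApply(path string, attempts int) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = applyManifest(path); err == nil {
			return nil
		}
		// Base delay grows per attempt; jitter keeps retries from aligning.
		delay := time.Duration(200+rand.Intn(400)) * time.Millisecond * time.Duration(i+1)
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	_ = retryApply("/etc/kubernetes/addons/storage-provisioner.yaml", 5)
}
```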
	I1209 23:14:20.346439  214436 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1209 23:14:20.346512  214436 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1209 23:14:20.367645  214436 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1209 23:14:20.367723  214436 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1209 23:14:20.386968  214436 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1209 23:14:20.387041  214436 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1209 23:14:20.405663  214436 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1209 23:14:20.405737  214436 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1209 23:14:20.424580  214436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1209 23:14:20.501836  214436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1209 23:14:20.541340  214436 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 23:14:20.541442  214436 retry.go:31] will retry after 245.971621ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1209 23:14:20.623200  214436 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 23:14:20.623287  214436 retry.go:31] will retry after 320.731058ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 23:14:20.687552  214436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 23:14:20.689896  214436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1209 23:14:20.788199  214436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1209 23:14:20.860958  214436 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 23:14:20.861126  214436 retry.go:31] will retry after 207.262482ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1209 23:14:20.861075  214436 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 23:14:20.861186  214436 retry.go:31] will retry after 245.686325ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1209 23:14:20.939480  214436 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 23:14:20.939582  214436 retry.go:31] will retry after 337.234046ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 23:14:20.944835  214436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1209 23:14:21.041641  214436 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 23:14:21.041719  214436 retry.go:31] will retry after 480.106769ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 23:14:21.068934  214436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 23:14:21.107416  214436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W1209 23:14:21.175048  214436 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 23:14:21.175081  214436 retry.go:31] will retry after 761.011958ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1209 23:14:21.270567  214436 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 23:14:21.270602  214436 retry.go:31] will retry after 698.060617ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 23:14:21.277899  214436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1209 23:14:21.367222  214436 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 23:14:21.367257  214436 retry.go:31] will retry after 446.449156ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 23:14:21.522334  214436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1209 23:14:21.619084  214436 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 23:14:21.619116  214436 retry.go:31] will retry after 430.642974ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 23:14:21.814387  214436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1209 23:14:21.874049  214436 node_ready.go:53] error getting node "old-k8s-version-098617": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-098617": dial tcp 192.168.76.2:8443: connect: connection refused
	W1209 23:14:21.899026  214436 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 23:14:21.899068  214436 retry.go:31] will retry after 485.374595ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 23:14:21.936282  214436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 23:14:21.969507  214436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1209 23:14:22.050786  214436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1209 23:14:22.067895  214436 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 23:14:22.067928  214436 retry.go:31] will retry after 917.617953ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1209 23:14:22.135708  214436 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 23:14:22.135742  214436 retry.go:31] will retry after 680.996206ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1209 23:14:22.180525  214436 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 23:14:22.180558  214436 retry.go:31] will retry after 1.624197287s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 23:14:22.385524  214436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1209 23:14:22.479403  214436 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 23:14:22.479436  214436 retry.go:31] will retry after 1.355295288s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 23:14:22.817659  214436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W1209 23:14:22.908908  214436 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 23:14:22.908989  214436 retry.go:31] will retry after 691.827429ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 23:14:22.986375  214436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1209 23:14:23.068432  214436 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 23:14:23.068506  214436 retry.go:31] will retry after 1.571617906s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 23:14:23.601528  214436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W1209 23:14:23.711313  214436 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 23:14:23.711349  214436 retry.go:31] will retry after 2.560422129s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 23:14:23.805630  214436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 23:14:23.835010  214436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1209 23:14:23.922543  214436 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 23:14:23.922579  214436 retry.go:31] will retry after 1.632331853s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1209 23:14:23.962038  214436 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 23:14:23.962074  214436 retry.go:31] will retry after 1.584214603s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 23:14:24.373654  214436 node_ready.go:53] error getting node "old-k8s-version-098617": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-098617": dial tcp 192.168.76.2:8443: connect: connection refused
	I1209 23:14:24.640407  214436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1209 23:14:24.739085  214436 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 23:14:24.739116  214436 retry.go:31] will retry after 1.912935358s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 23:14:25.548183  214436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1209 23:14:25.555017  214436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1209 23:14:25.791670  214436 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 23:14:25.791699  214436 retry.go:31] will retry after 3.115417501s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1209 23:14:25.861829  214436 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 23:14:25.861860  214436 retry.go:31] will retry after 3.058376551s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 23:14:26.272603  214436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W1209 23:14:26.457643  214436 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 23:14:26.457670  214436 retry.go:31] will retry after 3.34728315s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 23:14:26.653032  214436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1209 23:14:26.754662  214436 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 23:14:26.754697  214436 retry.go:31] will retry after 2.297633304s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 23:14:26.874171  214436 node_ready.go:53] error getting node "old-k8s-version-098617": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-098617": dial tcp 192.168.76.2:8443: connect: connection refused
	I1209 23:14:28.908250  214436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1209 23:14:28.920581  214436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 23:14:29.053004  214436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 23:14:29.805946  214436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1209 23:14:39.672129  214436 node_ready.go:49] node "old-k8s-version-098617" has status "Ready":"True"
	I1209 23:14:39.672159  214436 node_ready.go:38] duration metric: took 19.799350354s for node "old-k8s-version-098617" to be "Ready" ...
	I1209 23:14:39.672170  214436 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
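After the restart the node becomes Ready roughly 20 seconds in, and the run then waits for each system-critical pod. A hedged client-go sketch of that kind of readiness poll; the kubeconfig path and node name come from this log, while the poll interval is an assumption:

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the named node has a Ready condition of True.
func nodeReady(cs *kubernetes.Clientset, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(6 * time.Minute) // matches the 6m0s wait in the log
	for time.Now().Before(deadline) {
		ok, err := nodeReady(cs, "old-k8s-version-098617")
		fmt.Println("ready:", ok, "err:", err)
		if ok {
			return
		}
		time.Sleep(2 * time.Second) // poll interval is an assumption
	}
}
```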
	I1209 23:14:40.109927  214436 pod_ready.go:79] waiting up to 6m0s for pod "coredns-74ff55c5b-tz959" in "kube-system" namespace to be "Ready" ...
	I1209 23:14:40.540131  214436 pod_ready.go:93] pod "coredns-74ff55c5b-tz959" in "kube-system" namespace has status "Ready":"True"
	I1209 23:14:40.540164  214436 pod_ready.go:82] duration metric: took 430.192557ms for pod "coredns-74ff55c5b-tz959" in "kube-system" namespace to be "Ready" ...
	I1209 23:14:40.540175  214436 pod_ready.go:79] waiting up to 6m0s for pod "etcd-old-k8s-version-098617" in "kube-system" namespace to be "Ready" ...
	I1209 23:14:40.625483  214436 pod_ready.go:93] pod "etcd-old-k8s-version-098617" in "kube-system" namespace has status "Ready":"True"
	I1209 23:14:40.625508  214436 pod_ready.go:82] duration metric: took 85.324741ms for pod "etcd-old-k8s-version-098617" in "kube-system" namespace to be "Ready" ...
	I1209 23:14:40.625522  214436 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-098617" in "kube-system" namespace to be "Ready" ...
	I1209 23:14:42.663549  214436 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-098617" in "kube-system" namespace has status "Ready":"False"
	I1209 23:14:44.344465  214436 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (15.423842724s)
	I1209 23:14:44.344719  214436 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (15.43643118s)
	I1209 23:14:44.344841  214436 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (15.291810421s)
	I1209 23:14:44.344924  214436 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (14.538951397s)
	I1209 23:14:44.344943  214436 addons.go:475] Verifying addon metrics-server=true in "old-k8s-version-098617"
	I1209 23:14:44.346533  214436 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-098617 addons enable metrics-server
	
	I1209 23:14:44.356434  214436 out.go:177] * Enabled addons: storage-provisioner, metrics-server, dashboard, default-storageclass
	I1209 23:14:44.357746  214436 addons.go:510] duration metric: took 24.821782147s for enable addons: enabled=[storage-provisioner metrics-server dashboard default-storageclass]
	I1209 23:14:45.134536  214436 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-098617" in "kube-system" namespace has status "Ready":"False"
	I1209 23:14:47.141549  214436 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-098617" in "kube-system" namespace has status "Ready":"False"
	I1209 23:14:48.132334  214436 pod_ready.go:93] pod "kube-apiserver-old-k8s-version-098617" in "kube-system" namespace has status "Ready":"True"
	I1209 23:14:48.132358  214436 pod_ready.go:82] duration metric: took 7.506827533s for pod "kube-apiserver-old-k8s-version-098617" in "kube-system" namespace to be "Ready" ...
	I1209 23:14:48.132369  214436 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-098617" in "kube-system" namespace to be "Ready" ...
	I1209 23:14:50.139639  214436 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-098617" in "kube-system" namespace has status "Ready":"False"
	I1209 23:14:52.639558  214436 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-098617" in "kube-system" namespace has status "Ready":"False"
	I1209 23:14:55.139828  214436 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-098617" in "kube-system" namespace has status "Ready":"False"
	I1209 23:14:57.639611  214436 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-098617" in "kube-system" namespace has status "Ready":"False"
	I1209 23:15:00.171421  214436 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-098617" in "kube-system" namespace has status "Ready":"False"
	I1209 23:15:02.639287  214436 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-098617" in "kube-system" namespace has status "Ready":"False"
	I1209 23:15:04.640717  214436 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-098617" in "kube-system" namespace has status "Ready":"False"
	I1209 23:15:07.139672  214436 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-098617" in "kube-system" namespace has status "Ready":"False"
	I1209 23:15:09.139967  214436 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-098617" in "kube-system" namespace has status "Ready":"False"
	I1209 23:15:11.638966  214436 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-098617" in "kube-system" namespace has status "Ready":"False"
	I1209 23:15:13.640854  214436 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-098617" in "kube-system" namespace has status "Ready":"False"
	I1209 23:15:16.139178  214436 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-098617" in "kube-system" namespace has status "Ready":"False"
	I1209 23:15:18.139221  214436 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-098617" in "kube-system" namespace has status "Ready":"False"
	I1209 23:15:20.140329  214436 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-098617" in "kube-system" namespace has status "Ready":"False"
	I1209 23:15:22.638415  214436 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-098617" in "kube-system" namespace has status "Ready":"False"
	I1209 23:15:24.639115  214436 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-098617" in "kube-system" namespace has status "Ready":"False"
	I1209 23:15:27.138364  214436 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-098617" in "kube-system" namespace has status "Ready":"False"
	I1209 23:15:29.152113  214436 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-098617" in "kube-system" namespace has status "Ready":"False"
	I1209 23:15:31.638753  214436 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-098617" in "kube-system" namespace has status "Ready":"False"
	I1209 23:15:33.639162  214436 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-098617" in "kube-system" namespace has status "Ready":"False"
	I1209 23:15:36.138607  214436 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-098617" in "kube-system" namespace has status "Ready":"False"
	I1209 23:15:38.139505  214436 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-098617" in "kube-system" namespace has status "Ready":"False"
	I1209 23:15:40.150739  214436 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-098617" in "kube-system" namespace has status "Ready":"False"
	I1209 23:15:42.640430  214436 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-098617" in "kube-system" namespace has status "Ready":"False"
	I1209 23:15:45.139865  214436 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-098617" in "kube-system" namespace has status "Ready":"False"
	I1209 23:15:47.639320  214436 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-098617" in "kube-system" namespace has status "Ready":"False"
	I1209 23:15:50.142696  214436 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-098617" in "kube-system" namespace has status "Ready":"False"
	I1209 23:15:52.644365  214436 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-098617" in "kube-system" namespace has status "Ready":"False"
	I1209 23:15:55.140057  214436 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-098617" in "kube-system" namespace has status "Ready":"False"
	I1209 23:15:57.140353  214436 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-098617" in "kube-system" namespace has status "Ready":"False"
	I1209 23:15:59.638689  214436 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-098617" in "kube-system" namespace has status "Ready":"False"
	I1209 23:16:01.639005  214436 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-098617" in "kube-system" namespace has status "Ready":"False"
	I1209 23:16:03.640451  214436 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-098617" in "kube-system" namespace has status "Ready":"False"
	I1209 23:16:05.638852  214436 pod_ready.go:93] pod "kube-controller-manager-old-k8s-version-098617" in "kube-system" namespace has status "Ready":"True"
	I1209 23:16:05.638877  214436 pod_ready.go:82] duration metric: took 1m17.506500215s for pod "kube-controller-manager-old-k8s-version-098617" in "kube-system" namespace to be "Ready" ...
	I1209 23:16:05.638889  214436 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-d8xtk" in "kube-system" namespace to be "Ready" ...
	I1209 23:16:05.644829  214436 pod_ready.go:93] pod "kube-proxy-d8xtk" in "kube-system" namespace has status "Ready":"True"
	I1209 23:16:05.644862  214436 pod_ready.go:82] duration metric: took 5.964955ms for pod "kube-proxy-d8xtk" in "kube-system" namespace to be "Ready" ...
	I1209 23:16:05.644874  214436 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-098617" in "kube-system" namespace to be "Ready" ...
	I1209 23:16:05.657954  214436 pod_ready.go:93] pod "kube-scheduler-old-k8s-version-098617" in "kube-system" namespace has status "Ready":"True"
	I1209 23:16:05.657984  214436 pod_ready.go:82] duration metric: took 13.100835ms for pod "kube-scheduler-old-k8s-version-098617" in "kube-system" namespace to be "Ready" ...
	I1209 23:16:05.657996  214436 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace to be "Ready" ...
	I1209 23:16:07.664327  214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
	I1209 23:16:09.664535  214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
	I1209 23:16:11.664874  214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
	I1209 23:16:14.164375  214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
	I1209 23:16:16.164554  214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
	I1209 23:16:18.165003  214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
	I1209 23:16:20.664505  214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
	I1209 23:16:23.166856  214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
	I1209 23:16:25.663280  214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
	I1209 23:16:27.665953  214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
	I1209 23:16:30.165014  214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
	I1209 23:16:32.664603  214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
	I1209 23:16:35.163546  214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
	I1209 23:16:37.164621  214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
	I1209 23:16:39.664548  214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
	I1209 23:16:41.666072  214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
	I1209 23:16:44.164556  214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
	I1209 23:16:46.664774  214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
	I1209 23:16:49.164694  214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
	I1209 23:16:51.664221  214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
	I1209 23:16:54.164755  214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
	I1209 23:16:56.664332  214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
	I1209 23:16:58.665784  214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
	I1209 23:17:01.170559  214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
	I1209 23:17:03.664373  214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
	I1209 23:17:05.664755  214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
	I1209 23:17:08.177341  214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
	I1209 23:17:10.664387  214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
	I1209 23:17:12.665123  214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
	I1209 23:17:15.165255  214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
	I1209 23:17:17.665499  214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
	I1209 23:17:20.164857  214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
	I1209 23:17:22.664577  214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
	I1209 23:17:25.164453  214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
	I1209 23:17:27.164567  214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
	I1209 23:17:29.165095  214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
	I1209 23:17:31.237487  214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
	I1209 23:17:33.665522  214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
	I1209 23:17:36.164435  214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
	I1209 23:17:38.164666  214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
	I1209 23:17:40.663796  214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
	I1209 23:17:43.165635  214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
	I1209 23:17:45.169433  214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
	I1209 23:17:47.665053  214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
	I1209 23:17:50.164542  214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
	I1209 23:17:52.165095  214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
	I1209 23:17:54.667298  214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
	I1209 23:17:57.164432  214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
	I1209 23:17:59.164646  214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
	I1209 23:18:01.165481  214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
	I1209 23:18:03.669312  214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
	I1209 23:18:06.164905  214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
	I1209 23:18:08.663983  214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
	I1209 23:18:10.664599  214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
	I1209 23:18:13.164801  214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
	I1209 23:18:15.164914  214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
	I1209 23:18:17.165117  214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
	I1209 23:18:19.664270  214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
	I1209 23:18:21.665473  214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
	I1209 23:18:24.164825  214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
	I1209 23:18:26.664623  214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
	I1209 23:18:29.164503  214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
	I1209 23:18:31.164878  214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
	I1209 23:18:33.664476  214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
	I1209 23:18:35.664757  214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
	I1209 23:18:38.165193  214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
	I1209 23:18:40.166450  214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
	I1209 23:18:42.664073  214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
	I1209 23:18:45.165320  214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
	I1209 23:18:47.664230  214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
	I1209 23:18:49.664901  214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
	I1209 23:18:52.176681  214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
	I1209 23:18:54.663964  214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
	I1209 23:18:57.165662  214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
	I1209 23:18:59.664229  214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
	I1209 23:19:01.664972  214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
	I1209 23:19:03.667495  214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
	I1209 23:19:06.165347  214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
	I1209 23:19:08.663798  214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
	I1209 23:19:10.664539  214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
	I1209 23:19:13.164907  214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
	I1209 23:19:15.665047  214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
	I1209 23:19:18.164405  214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
	I1209 23:19:20.165209  214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
	I1209 23:19:22.225580  214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
	I1209 23:19:24.664588  214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
	I1209 23:19:27.164381  214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
	I1209 23:19:29.165815  214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
	I1209 23:19:31.664226  214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
	I1209 23:19:34.165399  214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
	I1209 23:19:36.664664  214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
	I1209 23:19:39.164359  214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
	I1209 23:19:41.664613  214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
	I1209 23:19:43.664834  214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
	I1209 23:19:46.165624  214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
	I1209 23:19:48.663844  214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
	I1209 23:19:50.664390  214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
	I1209 23:19:52.665353  214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
	I1209 23:19:55.164652  214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
	I1209 23:19:57.165671  214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
	I1209 23:19:59.664078  214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
	I1209 23:20:01.665558  214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
	I1209 23:20:04.163974  214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
	I1209 23:20:05.664675  214436 pod_ready.go:82] duration metric: took 4m0.006664918s for pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace to be "Ready" ...
	E1209 23:20:05.664698  214436 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1209 23:20:05.664709  214436 pod_ready.go:39] duration metric: took 5m25.992527502s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 23:20:05.664724  214436 api_server.go:52] waiting for apiserver process to appear ...
	I1209 23:20:05.664755  214436 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 23:20:05.664809  214436 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 23:20:05.716330  214436 cri.go:89] found id: "9d1b42abf41370b358213f4f369435098a79f49c98a0b87c365dbfb7068093df"
	I1209 23:20:05.716349  214436 cri.go:89] found id: "6d1ffef5c3c113df0c1c5643d627d680020df30e159ab4a69ccf738c6f7c09e6"
	I1209 23:20:05.716356  214436 cri.go:89] found id: ""
	I1209 23:20:05.716364  214436 logs.go:282] 2 containers: [9d1b42abf41370b358213f4f369435098a79f49c98a0b87c365dbfb7068093df 6d1ffef5c3c113df0c1c5643d627d680020df30e159ab4a69ccf738c6f7c09e6]
	I1209 23:20:05.716416  214436 ssh_runner.go:195] Run: which crictl
	I1209 23:20:05.720613  214436 ssh_runner.go:195] Run: which crictl
	I1209 23:20:05.724904  214436 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 23:20:05.724971  214436 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 23:20:05.794870  214436 cri.go:89] found id: "55743b620c44bd40d4ac5faf5671b922b7011a15211b36e43cf05dad9e0fdbfd"
	I1209 23:20:05.794890  214436 cri.go:89] found id: "063e1c49d2c947a9f19b4fae6421961ea9a67ce263a258ff3303dcc0ab203f1c"
	I1209 23:20:05.794895  214436 cri.go:89] found id: ""
	I1209 23:20:05.794903  214436 logs.go:282] 2 containers: [55743b620c44bd40d4ac5faf5671b922b7011a15211b36e43cf05dad9e0fdbfd 063e1c49d2c947a9f19b4fae6421961ea9a67ce263a258ff3303dcc0ab203f1c]
	I1209 23:20:05.795013  214436 ssh_runner.go:195] Run: which crictl
	I1209 23:20:05.798664  214436 ssh_runner.go:195] Run: which crictl
	I1209 23:20:05.806904  214436 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 23:20:05.806990  214436 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 23:20:05.883953  214436 cri.go:89] found id: "dae54bdfe8b6d504e8061ba13c99a15682b561cf9bca5574313dd3097076811b"
	I1209 23:20:05.883974  214436 cri.go:89] found id: "99d9ed2f5b230d0319b2465314133899b5c950239a3b96d9f4feb405f1b18650"
	I1209 23:20:05.883979  214436 cri.go:89] found id: ""
	I1209 23:20:05.883986  214436 logs.go:282] 2 containers: [dae54bdfe8b6d504e8061ba13c99a15682b561cf9bca5574313dd3097076811b 99d9ed2f5b230d0319b2465314133899b5c950239a3b96d9f4feb405f1b18650]
	I1209 23:20:05.884039  214436 ssh_runner.go:195] Run: which crictl
	I1209 23:20:05.888212  214436 ssh_runner.go:195] Run: which crictl
	I1209 23:20:05.892459  214436 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 23:20:05.892527  214436 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 23:20:05.952763  214436 cri.go:89] found id: "e3e7eabe1dad882678b1f0f2e7a8cff160d9b5e4146196f53ed8533082a0103e"
	I1209 23:20:05.952780  214436 cri.go:89] found id: "5693d8f440cbbfff0094faddfc750157e27b436336b58b60026d6e4b6afb7c19"
	I1209 23:20:05.952785  214436 cri.go:89] found id: ""
	I1209 23:20:05.952792  214436 logs.go:282] 2 containers: [e3e7eabe1dad882678b1f0f2e7a8cff160d9b5e4146196f53ed8533082a0103e 5693d8f440cbbfff0094faddfc750157e27b436336b58b60026d6e4b6afb7c19]
	I1209 23:20:05.952849  214436 ssh_runner.go:195] Run: which crictl
	I1209 23:20:05.957287  214436 ssh_runner.go:195] Run: which crictl
	I1209 23:20:05.961182  214436 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 23:20:05.961248  214436 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 23:20:06.019302  214436 cri.go:89] found id: "3a123be1d317e1e4f654bafa3493726c60356903a91f6e64d7b29782641f2d50"
	I1209 23:20:06.019321  214436 cri.go:89] found id: "a33fca2389d21d809231c03f4d59c7c6edd2b935f0a6bee69e06642b5d121aad"
	I1209 23:20:06.019325  214436 cri.go:89] found id: ""
	I1209 23:20:06.019332  214436 logs.go:282] 2 containers: [3a123be1d317e1e4f654bafa3493726c60356903a91f6e64d7b29782641f2d50 a33fca2389d21d809231c03f4d59c7c6edd2b935f0a6bee69e06642b5d121aad]
	I1209 23:20:06.019393  214436 ssh_runner.go:195] Run: which crictl
	I1209 23:20:06.024900  214436 ssh_runner.go:195] Run: which crictl
	I1209 23:20:06.030763  214436 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 23:20:06.030843  214436 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 23:20:06.123188  214436 cri.go:89] found id: "10660454cbd9d4da094cb8f100e7feceef0b146ea7a208113cba972405412cf4"
	I1209 23:20:06.123204  214436 cri.go:89] found id: "f68628204e6f96e082e5b48a6dc631b0a69e7de46bf5da75a9ca3e6911da3442"
	I1209 23:20:06.123209  214436 cri.go:89] found id: ""
	I1209 23:20:06.123215  214436 logs.go:282] 2 containers: [10660454cbd9d4da094cb8f100e7feceef0b146ea7a208113cba972405412cf4 f68628204e6f96e082e5b48a6dc631b0a69e7de46bf5da75a9ca3e6911da3442]
	I1209 23:20:06.123270  214436 ssh_runner.go:195] Run: which crictl
	I1209 23:20:06.138620  214436 ssh_runner.go:195] Run: which crictl
	I1209 23:20:06.145339  214436 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 23:20:06.145447  214436 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 23:20:06.212909  214436 cri.go:89] found id: "394606f289ebff6412cd8423f28e2ba1a7918b8e4eac2870a5c1825e8e571eee"
	I1209 23:20:06.212943  214436 cri.go:89] found id: "5125ce4b5b492867a27c0ac7a7b0e99ee7d2c899aba434ce230c8fe5eb273f38"
	I1209 23:20:06.212953  214436 cri.go:89] found id: ""
	I1209 23:20:06.212961  214436 logs.go:282] 2 containers: [394606f289ebff6412cd8423f28e2ba1a7918b8e4eac2870a5c1825e8e571eee 5125ce4b5b492867a27c0ac7a7b0e99ee7d2c899aba434ce230c8fe5eb273f38]
	I1209 23:20:06.213032  214436 ssh_runner.go:195] Run: which crictl
	I1209 23:20:06.218135  214436 ssh_runner.go:195] Run: which crictl
	I1209 23:20:06.222452  214436 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 23:20:06.222542  214436 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 23:20:06.274060  214436 cri.go:89] found id: "f5e0a0afceebb969d8da6457fcca1f6b9964499a31fb1842750ccb5a3884ddf2"
	I1209 23:20:06.274088  214436 cri.go:89] found id: ""
	I1209 23:20:06.274096  214436 logs.go:282] 1 containers: [f5e0a0afceebb969d8da6457fcca1f6b9964499a31fb1842750ccb5a3884ddf2]
	I1209 23:20:06.274150  214436 ssh_runner.go:195] Run: which crictl
	I1209 23:20:06.278975  214436 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1209 23:20:06.279049  214436 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 23:20:06.347374  214436 cri.go:89] found id: "5cdaa2e6255fc4d96282bee6cb565b6257b645c8f8ad628144066bc11b36d0aa"
	I1209 23:20:06.347400  214436 cri.go:89] found id: "9c62f2e12bccb234691d9df725b23072f6bf214069ff068aea47352ec6a1ef2d"
	I1209 23:20:06.347405  214436 cri.go:89] found id: ""
	I1209 23:20:06.347413  214436 logs.go:282] 2 containers: [5cdaa2e6255fc4d96282bee6cb565b6257b645c8f8ad628144066bc11b36d0aa 9c62f2e12bccb234691d9df725b23072f6bf214069ff068aea47352ec6a1ef2d]
	I1209 23:20:06.347508  214436 ssh_runner.go:195] Run: which crictl
	I1209 23:20:06.351947  214436 ssh_runner.go:195] Run: which crictl
	I1209 23:20:06.356871  214436 logs.go:123] Gathering logs for kube-controller-manager [f68628204e6f96e082e5b48a6dc631b0a69e7de46bf5da75a9ca3e6911da3442] ...
	I1209 23:20:06.356903  214436 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f68628204e6f96e082e5b48a6dc631b0a69e7de46bf5da75a9ca3e6911da3442"
	I1209 23:20:06.435020  214436 logs.go:123] Gathering logs for container status ...
	I1209 23:20:06.435060  214436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 23:20:06.509695  214436 logs.go:123] Gathering logs for dmesg ...
	I1209 23:20:06.509772  214436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 23:20:06.529377  214436 logs.go:123] Gathering logs for kube-proxy [3a123be1d317e1e4f654bafa3493726c60356903a91f6e64d7b29782641f2d50] ...
	I1209 23:20:06.529453  214436 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3a123be1d317e1e4f654bafa3493726c60356903a91f6e64d7b29782641f2d50"
	I1209 23:20:06.604733  214436 logs.go:123] Gathering logs for coredns [99d9ed2f5b230d0319b2465314133899b5c950239a3b96d9f4feb405f1b18650] ...
	I1209 23:20:06.604761  214436 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 99d9ed2f5b230d0319b2465314133899b5c950239a3b96d9f4feb405f1b18650"
	I1209 23:20:06.657707  214436 logs.go:123] Gathering logs for kube-scheduler [e3e7eabe1dad882678b1f0f2e7a8cff160d9b5e4146196f53ed8533082a0103e] ...
	I1209 23:20:06.657734  214436 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e3e7eabe1dad882678b1f0f2e7a8cff160d9b5e4146196f53ed8533082a0103e"
	I1209 23:20:06.726130  214436 logs.go:123] Gathering logs for kube-scheduler [5693d8f440cbbfff0094faddfc750157e27b436336b58b60026d6e4b6afb7c19] ...
	I1209 23:20:06.726159  214436 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5693d8f440cbbfff0094faddfc750157e27b436336b58b60026d6e4b6afb7c19"
	I1209 23:20:06.781721  214436 logs.go:123] Gathering logs for kindnet [394606f289ebff6412cd8423f28e2ba1a7918b8e4eac2870a5c1825e8e571eee] ...
	I1209 23:20:06.781750  214436 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 394606f289ebff6412cd8423f28e2ba1a7918b8e4eac2870a5c1825e8e571eee"
	I1209 23:20:06.847695  214436 logs.go:123] Gathering logs for storage-provisioner [9c62f2e12bccb234691d9df725b23072f6bf214069ff068aea47352ec6a1ef2d] ...
	I1209 23:20:06.847726  214436 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c62f2e12bccb234691d9df725b23072f6bf214069ff068aea47352ec6a1ef2d"
	I1209 23:20:06.894335  214436 logs.go:123] Gathering logs for containerd ...
	I1209 23:20:06.894359  214436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 23:20:06.977797  214436 logs.go:123] Gathering logs for kubelet ...
	I1209 23:20:06.977838  214436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1209 23:20:07.057997  214436 logs.go:138] Found kubelet problem: Dec 09 23:14:42 old-k8s-version-098617 kubelet[661]: E1209 23:14:42.573159     661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1209 23:20:07.058238  214436 logs.go:138] Found kubelet problem: Dec 09 23:14:43 old-k8s-version-098617 kubelet[661]: E1209 23:14:43.065876     661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1209 23:20:07.060674  214436 logs.go:138] Found kubelet problem: Dec 09 23:15:03 old-k8s-version-098617 kubelet[661]: E1209 23:15:03.237918     661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
	W1209 23:20:07.061175  214436 logs.go:138] Found kubelet problem: Dec 09 23:15:04 old-k8s-version-098617 kubelet[661]: E1209 23:15:04.242454     661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
	W1209 23:20:07.063972  214436 logs.go:138] Found kubelet problem: Dec 09 23:15:05 old-k8s-version-098617 kubelet[661]: E1209 23:15:05.924038     661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1209 23:20:07.064687  214436 logs.go:138] Found kubelet problem: Dec 09 23:15:09 old-k8s-version-098617 kubelet[661]: E1209 23:15:09.067006     661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
	W1209 23:20:07.065159  214436 logs.go:138] Found kubelet problem: Dec 09 23:15:14 old-k8s-version-098617 kubelet[661]: E1209 23:15:14.272914     661 pod_workers.go:191] Error syncing pod 5d412ff6-54cf-4dde-b94b-f7cbb9a3a54c ("storage-provisioner_kube-system(5d412ff6-54cf-4dde-b94b-f7cbb9a3a54c)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(5d412ff6-54cf-4dde-b94b-f7cbb9a3a54c)"
	W1209 23:20:07.065395  214436 logs.go:138] Found kubelet problem: Dec 09 23:15:17 old-k8s-version-098617 kubelet[661]: E1209 23:15:17.595843     661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1209 23:20:07.066323  214436 logs.go:138] Found kubelet problem: Dec 09 23:15:20 old-k8s-version-098617 kubelet[661]: E1209 23:15:20.301079     661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
	W1209 23:20:07.066667  214436 logs.go:138] Found kubelet problem: Dec 09 23:15:29 old-k8s-version-098617 kubelet[661]: E1209 23:15:29.067084     661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
	W1209 23:20:07.069859  214436 logs.go:138] Found kubelet problem: Dec 09 23:15:30 old-k8s-version-098617 kubelet[661]: E1209 23:15:30.605005     661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1209 23:20:07.070661  214436 logs.go:138] Found kubelet problem: Dec 09 23:15:44 old-k8s-version-098617 kubelet[661]: E1209 23:15:44.374051     661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
	W1209 23:20:07.070901  214436 logs.go:138] Found kubelet problem: Dec 09 23:15:44 old-k8s-version-098617 kubelet[661]: E1209 23:15:44.595088     661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1209 23:20:07.071303  214436 logs.go:138] Found kubelet problem: Dec 09 23:15:49 old-k8s-version-098617 kubelet[661]: E1209 23:15:49.067450     661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
	W1209 23:20:07.071533  214436 logs.go:138] Found kubelet problem: Dec 09 23:15:55 old-k8s-version-098617 kubelet[661]: E1209 23:15:55.596194     661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1209 23:20:07.071937  214436 logs.go:138] Found kubelet problem: Dec 09 23:16:00 old-k8s-version-098617 kubelet[661]: E1209 23:16:00.594905     661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
	W1209 23:20:07.072160  214436 logs.go:138] Found kubelet problem: Dec 09 23:16:09 old-k8s-version-098617 kubelet[661]: E1209 23:16:09.595293     661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1209 23:20:07.072591  214436 logs.go:138] Found kubelet problem: Dec 09 23:16:12 old-k8s-version-098617 kubelet[661]: E1209 23:16:12.595368     661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
	W1209 23:20:07.075234  214436 logs.go:138] Found kubelet problem: Dec 09 23:16:24 old-k8s-version-098617 kubelet[661]: E1209 23:16:24.603234     661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1209 23:20:07.075861  214436 logs.go:138] Found kubelet problem: Dec 09 23:16:28 old-k8s-version-098617 kubelet[661]: E1209 23:16:28.526789     661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
	W1209 23:20:07.076267  214436 logs.go:138] Found kubelet problem: Dec 09 23:16:29 old-k8s-version-098617 kubelet[661]: E1209 23:16:29.529315     661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
	W1209 23:20:07.076475  214436 logs.go:138] Found kubelet problem: Dec 09 23:16:36 old-k8s-version-098617 kubelet[661]: E1209 23:16:36.595249     661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1209 23:20:07.076956  214436 logs.go:138] Found kubelet problem: Dec 09 23:16:40 old-k8s-version-098617 kubelet[661]: E1209 23:16:40.594761     661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
	W1209 23:20:07.077146  214436 logs.go:138] Found kubelet problem: Dec 09 23:16:49 old-k8s-version-098617 kubelet[661]: E1209 23:16:49.599764     661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1209 23:20:07.077470  214436 logs.go:138] Found kubelet problem: Dec 09 23:16:55 old-k8s-version-098617 kubelet[661]: E1209 23:16:55.595001     661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
	W1209 23:20:07.077652  214436 logs.go:138] Found kubelet problem: Dec 09 23:17:02 old-k8s-version-098617 kubelet[661]: E1209 23:17:02.595200     661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1209 23:20:07.077975  214436 logs.go:138] Found kubelet problem: Dec 09 23:17:09 old-k8s-version-098617 kubelet[661]: E1209 23:17:09.594689     661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
	W1209 23:20:07.078198  214436 logs.go:138] Found kubelet problem: Dec 09 23:17:16 old-k8s-version-098617 kubelet[661]: E1209 23:17:16.595250     661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1209 23:20:07.078594  214436 logs.go:138] Found kubelet problem: Dec 09 23:17:22 old-k8s-version-098617 kubelet[661]: E1209 23:17:22.594748     661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
	W1209 23:20:07.078921  214436 logs.go:138] Found kubelet problem: Dec 09 23:17:27 old-k8s-version-098617 kubelet[661]: E1209 23:17:27.596051     661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1209 23:20:07.079296  214436 logs.go:138] Found kubelet problem: Dec 09 23:17:36 old-k8s-version-098617 kubelet[661]: E1209 23:17:36.594954     661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
	W1209 23:20:07.079540  214436 logs.go:138] Found kubelet problem: Dec 09 23:17:39 old-k8s-version-098617 kubelet[661]: E1209 23:17:39.595361     661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1209 23:20:07.079899  214436 logs.go:138] Found kubelet problem: Dec 09 23:17:47 old-k8s-version-098617 kubelet[661]: E1209 23:17:47.595830     661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
	W1209 23:20:07.082430  214436 logs.go:138] Found kubelet problem: Dec 09 23:17:53 old-k8s-version-098617 kubelet[661]: E1209 23:17:53.605847     661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1209 23:20:07.083089  214436 logs.go:138] Found kubelet problem: Dec 09 23:17:58 old-k8s-version-098617 kubelet[661]: E1209 23:17:58.790114     661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
	W1209 23:20:07.083442  214436 logs.go:138] Found kubelet problem: Dec 09 23:17:59 old-k8s-version-098617 kubelet[661]: E1209 23:17:59.799377     661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
	W1209 23:20:07.083650  214436 logs.go:138] Found kubelet problem: Dec 09 23:18:05 old-k8s-version-098617 kubelet[661]: E1209 23:18:05.595490     661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1209 23:20:07.084004  214436 logs.go:138] Found kubelet problem: Dec 09 23:18:12 old-k8s-version-098617 kubelet[661]: E1209 23:18:12.594695     661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
	W1209 23:20:07.084210  214436 logs.go:138] Found kubelet problem: Dec 09 23:18:16 old-k8s-version-098617 kubelet[661]: E1209 23:18:16.595137     661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1209 23:20:07.084568  214436 logs.go:138] Found kubelet problem: Dec 09 23:18:24 old-k8s-version-098617 kubelet[661]: E1209 23:18:24.595133     661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
	W1209 23:20:07.084772  214436 logs.go:138] Found kubelet problem: Dec 09 23:18:30 old-k8s-version-098617 kubelet[661]: E1209 23:18:30.595155     661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1209 23:20:07.085121  214436 logs.go:138] Found kubelet problem: Dec 09 23:18:38 old-k8s-version-098617 kubelet[661]: E1209 23:18:38.594827     661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
	W1209 23:20:07.085329  214436 logs.go:138] Found kubelet problem: Dec 09 23:18:45 old-k8s-version-098617 kubelet[661]: E1209 23:18:45.599645     661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1209 23:20:07.085679  214436 logs.go:138] Found kubelet problem: Dec 09 23:18:52 old-k8s-version-098617 kubelet[661]: E1209 23:18:52.594642     661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
	W1209 23:20:07.085883  214436 logs.go:138] Found kubelet problem: Dec 09 23:18:56 old-k8s-version-098617 kubelet[661]: E1209 23:18:56.595179     661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1209 23:20:07.086254  214436 logs.go:138] Found kubelet problem: Dec 09 23:19:05 old-k8s-version-098617 kubelet[661]: E1209 23:19:05.595377     661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
	W1209 23:20:07.086460  214436 logs.go:138] Found kubelet problem: Dec 09 23:19:09 old-k8s-version-098617 kubelet[661]: E1209 23:19:09.601501     661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1209 23:20:07.086828  214436 logs.go:138] Found kubelet problem: Dec 09 23:19:17 old-k8s-version-098617 kubelet[661]: E1209 23:19:17.599504     661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
	W1209 23:20:07.087033  214436 logs.go:138] Found kubelet problem: Dec 09 23:19:23 old-k8s-version-098617 kubelet[661]: E1209 23:19:23.595388     661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1209 23:20:07.087454  214436 logs.go:138] Found kubelet problem: Dec 09 23:19:28 old-k8s-version-098617 kubelet[661]: E1209 23:19:28.594771     661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
	W1209 23:20:07.087676  214436 logs.go:138] Found kubelet problem: Dec 09 23:19:34 old-k8s-version-098617 kubelet[661]: E1209 23:19:34.595081     661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1209 23:20:07.088025  214436 logs.go:138] Found kubelet problem: Dec 09 23:19:40 old-k8s-version-098617 kubelet[661]: E1209 23:19:40.595347     661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
	W1209 23:20:07.088229  214436 logs.go:138] Found kubelet problem: Dec 09 23:19:48 old-k8s-version-098617 kubelet[661]: E1209 23:19:48.595049     661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1209 23:20:07.088583  214436 logs.go:138] Found kubelet problem: Dec 09 23:19:54 old-k8s-version-098617 kubelet[661]: E1209 23:19:54.595330     661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
	W1209 23:20:07.088793  214436 logs.go:138] Found kubelet problem: Dec 09 23:20:01 old-k8s-version-098617 kubelet[661]: E1209 23:20:01.601974     661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
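
The run of "Found kubelet problem" warnings above is minikube scanning recent kubelet journal output for known failure signatures; in this run they all trace back to the metrics-server image pull against the fake.domain registry that this test points metrics-server at (which cannot resolve), and to the crash-looping dashboard-metrics-scraper container. A minimal sketch of that kind of scan, assuming direct journalctl access on the node and a simplified pattern list (the real matcher in minikube's logs.go keeps its own patterns), could look like:

package main

import (
	"fmt"
	"os/exec"
	"regexp"
	"strings"
)

// findKubeletProblems pulls the last 400 kubelet journal lines and flags those
// matching a few failure signatures seen in this report. Illustrative sketch:
// the pattern list here is simplified, not minikube's actual matcher.
func findKubeletProblems() ([]string, error) {
	out, err := exec.Command("sudo", "journalctl", "-u", "kubelet", "-n", "400").Output()
	if err != nil {
		return nil, err
	}
	pattern := regexp.MustCompile(`Error syncing pod|ErrImagePull|ImagePullBackOff|CrashLoopBackOff`)
	var problems []string
	for _, line := range strings.Split(string(out), "\n") {
		if pattern.MatchString(line) {
			problems = append(problems, line)
		}
	}
	return problems, nil
}

func main() {
	problems, err := findKubeletProblems()
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Printf("found %d kubelet problem line(s)\n", len(problems))
}
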
	I1209 23:20:07.088807  214436 logs.go:123] Gathering logs for describe nodes ...
	I1209 23:20:07.088833  214436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 23:20:07.280596  214436 logs.go:123] Gathering logs for kube-controller-manager [10660454cbd9d4da094cb8f100e7feceef0b146ea7a208113cba972405412cf4] ...
	I1209 23:20:07.280630  214436 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 10660454cbd9d4da094cb8f100e7feceef0b146ea7a208113cba972405412cf4"
	I1209 23:20:07.347243  214436 logs.go:123] Gathering logs for kindnet [5125ce4b5b492867a27c0ac7a7b0e99ee7d2c899aba434ce230c8fe5eb273f38] ...
	I1209 23:20:07.347276  214436 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5125ce4b5b492867a27c0ac7a7b0e99ee7d2c899aba434ce230c8fe5eb273f38"
	I1209 23:20:07.392055  214436 logs.go:123] Gathering logs for kube-apiserver [6d1ffef5c3c113df0c1c5643d627d680020df30e159ab4a69ccf738c6f7c09e6] ...
	I1209 23:20:07.392083  214436 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d1ffef5c3c113df0c1c5643d627d680020df30e159ab4a69ccf738c6f7c09e6"
	I1209 23:20:07.466330  214436 logs.go:123] Gathering logs for etcd [55743b620c44bd40d4ac5faf5671b922b7011a15211b36e43cf05dad9e0fdbfd] ...
	I1209 23:20:07.466359  214436 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 55743b620c44bd40d4ac5faf5671b922b7011a15211b36e43cf05dad9e0fdbfd"
	I1209 23:20:07.519621  214436 logs.go:123] Gathering logs for coredns [dae54bdfe8b6d504e8061ba13c99a15682b561cf9bca5574313dd3097076811b] ...
	I1209 23:20:07.519654  214436 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dae54bdfe8b6d504e8061ba13c99a15682b561cf9bca5574313dd3097076811b"
	I1209 23:20:07.568042  214436 logs.go:123] Gathering logs for kube-proxy [a33fca2389d21d809231c03f4d59c7c6edd2b935f0a6bee69e06642b5d121aad] ...
	I1209 23:20:07.568069  214436 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a33fca2389d21d809231c03f4d59c7c6edd2b935f0a6bee69e06642b5d121aad"
	I1209 23:20:07.624539  214436 logs.go:123] Gathering logs for kubernetes-dashboard [f5e0a0afceebb969d8da6457fcca1f6b9964499a31fb1842750ccb5a3884ddf2] ...
	I1209 23:20:07.624622  214436 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f5e0a0afceebb969d8da6457fcca1f6b9964499a31fb1842750ccb5a3884ddf2"
	I1209 23:20:07.667855  214436 logs.go:123] Gathering logs for storage-provisioner [5cdaa2e6255fc4d96282bee6cb565b6257b645c8f8ad628144066bc11b36d0aa] ...
	I1209 23:20:07.667933  214436 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5cdaa2e6255fc4d96282bee6cb565b6257b645c8f8ad628144066bc11b36d0aa"
	I1209 23:20:07.707464  214436 logs.go:123] Gathering logs for kube-apiserver [9d1b42abf41370b358213f4f369435098a79f49c98a0b87c365dbfb7068093df] ...
	I1209 23:20:07.707545  214436 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d1b42abf41370b358213f4f369435098a79f49c98a0b87c365dbfb7068093df"
	I1209 23:20:07.767587  214436 logs.go:123] Gathering logs for etcd [063e1c49d2c947a9f19b4fae6421961ea9a67ce263a258ff3303dcc0ab203f1c] ...
	I1209 23:20:07.767619  214436 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 063e1c49d2c947a9f19b4fae6421961ea9a67ce263a258ff3303dcc0ab203f1c"
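
Each "Gathering logs for ..." step above takes a container ID and tails its recent output with `crictl logs --tail 400`. A minimal local sketch of that per-container loop, assuming crictl is installed and reachable directly (minikube actually runs the same command on the node via its ssh_runner), might look like:

package main

import (
	"fmt"
	"os/exec"
)

// gatherContainerLogs runs `crictl logs --tail 400` for each container ID and
// collects the combined output, mirroring the per-container loop in the log
// above. Illustrative only: minikube executes the command on the node over
// SSH rather than locally.
func gatherContainerLogs(ids []string) (map[string]string, error) {
	logs := make(map[string]string)
	for _, id := range ids {
		out, err := exec.Command("sudo", "/usr/bin/crictl", "logs", "--tail", "400", id).CombinedOutput()
		if err != nil {
			return nil, fmt.Errorf("crictl logs %s: %w", id, err)
		}
		logs[id] = string(out)
	}
	return logs, nil
}

func main() {
	// Hypothetical IDs; in the real run they come from `crictl ps -a --quiet`.
	logs, err := gatherContainerLogs([]string{"10660454cbd9", "5125ce4b5b49"})
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Printf("collected logs for %d container(s)\n", len(logs))
}
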
	I1209 23:20:07.815180  214436 out.go:358] Setting ErrFile to fd 2...
	I1209 23:20:07.815206  214436 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1209 23:20:07.815278  214436 out.go:270] X Problems detected in kubelet:
	W1209 23:20:07.815294  214436 out.go:270]   Dec 09 23:19:34 old-k8s-version-098617 kubelet[661]: E1209 23:19:34.595081     661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1209 23:20:07.815305  214436 out.go:270]   Dec 09 23:19:40 old-k8s-version-098617 kubelet[661]: E1209 23:19:40.595347     661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
	W1209 23:20:07.815314  214436 out.go:270]   Dec 09 23:19:48 old-k8s-version-098617 kubelet[661]: E1209 23:19:48.595049     661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1209 23:20:07.815321  214436 out.go:270]   Dec 09 23:19:54 old-k8s-version-098617 kubelet[661]: E1209 23:19:54.595330     661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
	W1209 23:20:07.815328  214436 out.go:270]   Dec 09 23:20:01 old-k8s-version-098617 kubelet[661]: E1209 23:20:01.601974     661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I1209 23:20:07.815335  214436 out.go:358] Setting ErrFile to fd 2...
	I1209 23:20:07.815343  214436 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 23:20:17.816817  214436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:20:17.829175  214436 api_server.go:72] duration metric: took 5m58.293675473s to wait for apiserver process to appear ...
	I1209 23:20:17.829198  214436 api_server.go:88] waiting for apiserver healthz status ...
	I1209 23:20:17.829236  214436 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 23:20:17.829298  214436 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 23:20:17.876769  214436 cri.go:89] found id: "9d1b42abf41370b358213f4f369435098a79f49c98a0b87c365dbfb7068093df"
	I1209 23:20:17.876789  214436 cri.go:89] found id: "6d1ffef5c3c113df0c1c5643d627d680020df30e159ab4a69ccf738c6f7c09e6"
	I1209 23:20:17.876794  214436 cri.go:89] found id: ""
	I1209 23:20:17.876802  214436 logs.go:282] 2 containers: [9d1b42abf41370b358213f4f369435098a79f49c98a0b87c365dbfb7068093df 6d1ffef5c3c113df0c1c5643d627d680020df30e159ab4a69ccf738c6f7c09e6]
	I1209 23:20:17.876858  214436 ssh_runner.go:195] Run: which crictl
	I1209 23:20:17.881224  214436 ssh_runner.go:195] Run: which crictl
	I1209 23:20:17.885015  214436 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 23:20:17.885093  214436 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 23:20:17.929102  214436 cri.go:89] found id: "55743b620c44bd40d4ac5faf5671b922b7011a15211b36e43cf05dad9e0fdbfd"
	I1209 23:20:17.929125  214436 cri.go:89] found id: "063e1c49d2c947a9f19b4fae6421961ea9a67ce263a258ff3303dcc0ab203f1c"
	I1209 23:20:17.929131  214436 cri.go:89] found id: ""
	I1209 23:20:17.929139  214436 logs.go:282] 2 containers: [55743b620c44bd40d4ac5faf5671b922b7011a15211b36e43cf05dad9e0fdbfd 063e1c49d2c947a9f19b4fae6421961ea9a67ce263a258ff3303dcc0ab203f1c]
	I1209 23:20:17.929197  214436 ssh_runner.go:195] Run: which crictl
	I1209 23:20:17.933926  214436 ssh_runner.go:195] Run: which crictl
	I1209 23:20:17.938094  214436 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 23:20:17.938162  214436 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 23:20:17.977492  214436 cri.go:89] found id: "dae54bdfe8b6d504e8061ba13c99a15682b561cf9bca5574313dd3097076811b"
	I1209 23:20:17.977522  214436 cri.go:89] found id: "99d9ed2f5b230d0319b2465314133899b5c950239a3b96d9f4feb405f1b18650"
	I1209 23:20:17.977527  214436 cri.go:89] found id: ""
	I1209 23:20:17.977534  214436 logs.go:282] 2 containers: [dae54bdfe8b6d504e8061ba13c99a15682b561cf9bca5574313dd3097076811b 99d9ed2f5b230d0319b2465314133899b5c950239a3b96d9f4feb405f1b18650]
	I1209 23:20:17.977590  214436 ssh_runner.go:195] Run: which crictl
	I1209 23:20:17.981288  214436 ssh_runner.go:195] Run: which crictl
	I1209 23:20:17.985261  214436 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 23:20:17.985330  214436 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 23:20:18.029284  214436 cri.go:89] found id: "e3e7eabe1dad882678b1f0f2e7a8cff160d9b5e4146196f53ed8533082a0103e"
	I1209 23:20:18.029319  214436 cri.go:89] found id: "5693d8f440cbbfff0094faddfc750157e27b436336b58b60026d6e4b6afb7c19"
	I1209 23:20:18.029325  214436 cri.go:89] found id: ""
	I1209 23:20:18.029332  214436 logs.go:282] 2 containers: [e3e7eabe1dad882678b1f0f2e7a8cff160d9b5e4146196f53ed8533082a0103e 5693d8f440cbbfff0094faddfc750157e27b436336b58b60026d6e4b6afb7c19]
	I1209 23:20:18.029417  214436 ssh_runner.go:195] Run: which crictl
	I1209 23:20:18.033631  214436 ssh_runner.go:195] Run: which crictl
	I1209 23:20:18.037655  214436 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 23:20:18.037755  214436 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 23:20:18.081360  214436 cri.go:89] found id: "3a123be1d317e1e4f654bafa3493726c60356903a91f6e64d7b29782641f2d50"
	I1209 23:20:18.081380  214436 cri.go:89] found id: "a33fca2389d21d809231c03f4d59c7c6edd2b935f0a6bee69e06642b5d121aad"
	I1209 23:20:18.081385  214436 cri.go:89] found id: ""
	I1209 23:20:18.081392  214436 logs.go:282] 2 containers: [3a123be1d317e1e4f654bafa3493726c60356903a91f6e64d7b29782641f2d50 a33fca2389d21d809231c03f4d59c7c6edd2b935f0a6bee69e06642b5d121aad]
	I1209 23:20:18.081481  214436 ssh_runner.go:195] Run: which crictl
	I1209 23:20:18.085180  214436 ssh_runner.go:195] Run: which crictl
	I1209 23:20:18.089322  214436 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 23:20:18.089409  214436 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 23:20:18.134668  214436 cri.go:89] found id: "10660454cbd9d4da094cb8f100e7feceef0b146ea7a208113cba972405412cf4"
	I1209 23:20:18.134696  214436 cri.go:89] found id: "f68628204e6f96e082e5b48a6dc631b0a69e7de46bf5da75a9ca3e6911da3442"
	I1209 23:20:18.134742  214436 cri.go:89] found id: ""
	I1209 23:20:18.134755  214436 logs.go:282] 2 containers: [10660454cbd9d4da094cb8f100e7feceef0b146ea7a208113cba972405412cf4 f68628204e6f96e082e5b48a6dc631b0a69e7de46bf5da75a9ca3e6911da3442]
	I1209 23:20:18.134813  214436 ssh_runner.go:195] Run: which crictl
	I1209 23:20:18.138670  214436 ssh_runner.go:195] Run: which crictl
	I1209 23:20:18.142891  214436 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 23:20:18.142966  214436 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 23:20:18.186936  214436 cri.go:89] found id: "394606f289ebff6412cd8423f28e2ba1a7918b8e4eac2870a5c1825e8e571eee"
	I1209 23:20:18.186957  214436 cri.go:89] found id: "5125ce4b5b492867a27c0ac7a7b0e99ee7d2c899aba434ce230c8fe5eb273f38"
	I1209 23:20:18.186962  214436 cri.go:89] found id: ""
	I1209 23:20:18.186970  214436 logs.go:282] 2 containers: [394606f289ebff6412cd8423f28e2ba1a7918b8e4eac2870a5c1825e8e571eee 5125ce4b5b492867a27c0ac7a7b0e99ee7d2c899aba434ce230c8fe5eb273f38]
	I1209 23:20:18.187033  214436 ssh_runner.go:195] Run: which crictl
	I1209 23:20:18.190761  214436 ssh_runner.go:195] Run: which crictl
	I1209 23:20:18.194279  214436 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 23:20:18.194347  214436 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 23:20:18.241192  214436 cri.go:89] found id: "f5e0a0afceebb969d8da6457fcca1f6b9964499a31fb1842750ccb5a3884ddf2"
	I1209 23:20:18.241225  214436 cri.go:89] found id: ""
	I1209 23:20:18.241234  214436 logs.go:282] 1 containers: [f5e0a0afceebb969d8da6457fcca1f6b9964499a31fb1842750ccb5a3884ddf2]
	I1209 23:20:18.241294  214436 ssh_runner.go:195] Run: which crictl
	I1209 23:20:18.245026  214436 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1209 23:20:18.245114  214436 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 23:20:18.288409  214436 cri.go:89] found id: "5cdaa2e6255fc4d96282bee6cb565b6257b645c8f8ad628144066bc11b36d0aa"
	I1209 23:20:18.288432  214436 cri.go:89] found id: "9c62f2e12bccb234691d9df725b23072f6bf214069ff068aea47352ec6a1ef2d"
	I1209 23:20:18.288437  214436 cri.go:89] found id: ""
	I1209 23:20:18.288444  214436 logs.go:282] 2 containers: [5cdaa2e6255fc4d96282bee6cb565b6257b645c8f8ad628144066bc11b36d0aa 9c62f2e12bccb234691d9df725b23072f6bf214069ff068aea47352ec6a1ef2d]
	I1209 23:20:18.288512  214436 ssh_runner.go:195] Run: which crictl
	I1209 23:20:18.292520  214436 ssh_runner.go:195] Run: which crictl
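
The listing steps above discover container IDs per component with `sudo crictl ps -a --quiet --name=<component>`; in this run most components return two IDs (the pre-restart and post-restart containers), while kubernetes-dashboard returns one. A small sketch of that discovery pass, under the same local-crictl assumption as the earlier sketch, could be:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs returns the IDs of all containers (running or exited) whose
// name matches the given component, via `crictl ps -a --quiet --name=...`.
// Assumes crictl is on PATH locally; minikube resolves its path with
// `which crictl` and runs the command on the node.
func listContainerIDs(component string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
	if err != nil {
		return nil, fmt.Errorf("crictl ps for %s: %w", component, err)
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, component := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := listContainerIDs(component)
		if err != nil {
			fmt.Println("error:", err)
			continue
		}
		fmt.Printf("%s: %d container(s)\n", component, len(ids))
	}
}
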
	I1209 23:20:18.296226  214436 logs.go:123] Gathering logs for coredns [99d9ed2f5b230d0319b2465314133899b5c950239a3b96d9f4feb405f1b18650] ...
	I1209 23:20:18.296262  214436 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 99d9ed2f5b230d0319b2465314133899b5c950239a3b96d9f4feb405f1b18650"
	I1209 23:20:18.341368  214436 logs.go:123] Gathering logs for kube-proxy [a33fca2389d21d809231c03f4d59c7c6edd2b935f0a6bee69e06642b5d121aad] ...
	I1209 23:20:18.341398  214436 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a33fca2389d21d809231c03f4d59c7c6edd2b935f0a6bee69e06642b5d121aad"
	I1209 23:20:18.382245  214436 logs.go:123] Gathering logs for kube-controller-manager [10660454cbd9d4da094cb8f100e7feceef0b146ea7a208113cba972405412cf4] ...
	I1209 23:20:18.382273  214436 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 10660454cbd9d4da094cb8f100e7feceef0b146ea7a208113cba972405412cf4"
	I1209 23:20:18.439435  214436 logs.go:123] Gathering logs for kindnet [394606f289ebff6412cd8423f28e2ba1a7918b8e4eac2870a5c1825e8e571eee] ...
	I1209 23:20:18.439469  214436 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 394606f289ebff6412cd8423f28e2ba1a7918b8e4eac2870a5c1825e8e571eee"
	I1209 23:20:18.486544  214436 logs.go:123] Gathering logs for kindnet [5125ce4b5b492867a27c0ac7a7b0e99ee7d2c899aba434ce230c8fe5eb273f38] ...
	I1209 23:20:18.486572  214436 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5125ce4b5b492867a27c0ac7a7b0e99ee7d2c899aba434ce230c8fe5eb273f38"
	I1209 23:20:18.534401  214436 logs.go:123] Gathering logs for kubernetes-dashboard [f5e0a0afceebb969d8da6457fcca1f6b9964499a31fb1842750ccb5a3884ddf2] ...
	I1209 23:20:18.534429  214436 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f5e0a0afceebb969d8da6457fcca1f6b9964499a31fb1842750ccb5a3884ddf2"
	I1209 23:20:18.578254  214436 logs.go:123] Gathering logs for kubelet ...
	I1209 23:20:18.578286  214436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1209 23:20:18.641622  214436 logs.go:138] Found kubelet problem: Dec 09 23:14:42 old-k8s-version-098617 kubelet[661]: E1209 23:14:42.573159     661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1209 23:20:18.641827  214436 logs.go:138] Found kubelet problem: Dec 09 23:14:43 old-k8s-version-098617 kubelet[661]: E1209 23:14:43.065876     661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1209 23:20:18.644121  214436 logs.go:138] Found kubelet problem: Dec 09 23:15:03 old-k8s-version-098617 kubelet[661]: E1209 23:15:03.237918     661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
	W1209 23:20:18.644585  214436 logs.go:138] Found kubelet problem: Dec 09 23:15:04 old-k8s-version-098617 kubelet[661]: E1209 23:15:04.242454     661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
	W1209 23:20:18.647131  214436 logs.go:138] Found kubelet problem: Dec 09 23:15:05 old-k8s-version-098617 kubelet[661]: E1209 23:15:05.924038     661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1209 23:20:18.647801  214436 logs.go:138] Found kubelet problem: Dec 09 23:15:09 old-k8s-version-098617 kubelet[661]: E1209 23:15:09.067006     661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
	W1209 23:20:18.648277  214436 logs.go:138] Found kubelet problem: Dec 09 23:15:14 old-k8s-version-098617 kubelet[661]: E1209 23:15:14.272914     661 pod_workers.go:191] Error syncing pod 5d412ff6-54cf-4dde-b94b-f7cbb9a3a54c ("storage-provisioner_kube-system(5d412ff6-54cf-4dde-b94b-f7cbb9a3a54c)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(5d412ff6-54cf-4dde-b94b-f7cbb9a3a54c)"
	W1209 23:20:18.648465  214436 logs.go:138] Found kubelet problem: Dec 09 23:15:17 old-k8s-version-098617 kubelet[661]: E1209 23:15:17.595843     661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1209 23:20:18.649467  214436 logs.go:138] Found kubelet problem: Dec 09 23:15:20 old-k8s-version-098617 kubelet[661]: E1209 23:15:20.301079     661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
	W1209 23:20:18.649799  214436 logs.go:138] Found kubelet problem: Dec 09 23:15:29 old-k8s-version-098617 kubelet[661]: E1209 23:15:29.067084     661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
	W1209 23:20:18.652410  214436 logs.go:138] Found kubelet problem: Dec 09 23:15:30 old-k8s-version-098617 kubelet[661]: E1209 23:15:30.605005     661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1209 23:20:18.653183  214436 logs.go:138] Found kubelet problem: Dec 09 23:15:44 old-k8s-version-098617 kubelet[661]: E1209 23:15:44.374051     661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
	W1209 23:20:18.653375  214436 logs.go:138] Found kubelet problem: Dec 09 23:15:44 old-k8s-version-098617 kubelet[661]: E1209 23:15:44.595088     661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1209 23:20:18.653709  214436 logs.go:138] Found kubelet problem: Dec 09 23:15:49 old-k8s-version-098617 kubelet[661]: E1209 23:15:49.067450     661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
	W1209 23:20:18.653897  214436 logs.go:138] Found kubelet problem: Dec 09 23:15:55 old-k8s-version-098617 kubelet[661]: E1209 23:15:55.596194     661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1209 23:20:18.654224  214436 logs.go:138] Found kubelet problem: Dec 09 23:16:00 old-k8s-version-098617 kubelet[661]: E1209 23:16:00.594905     661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
	W1209 23:20:18.654408  214436 logs.go:138] Found kubelet problem: Dec 09 23:16:09 old-k8s-version-098617 kubelet[661]: E1209 23:16:09.595293     661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1209 23:20:18.654770  214436 logs.go:138] Found kubelet problem: Dec 09 23:16:12 old-k8s-version-098617 kubelet[661]: E1209 23:16:12.595368     661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
	W1209 23:20:18.657216  214436 logs.go:138] Found kubelet problem: Dec 09 23:16:24 old-k8s-version-098617 kubelet[661]: E1209 23:16:24.603234     661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1209 23:20:18.657805  214436 logs.go:138] Found kubelet problem: Dec 09 23:16:28 old-k8s-version-098617 kubelet[661]: E1209 23:16:28.526789     661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
	W1209 23:20:18.658134  214436 logs.go:138] Found kubelet problem: Dec 09 23:16:29 old-k8s-version-098617 kubelet[661]: E1209 23:16:29.529315     661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
	W1209 23:20:18.658317  214436 logs.go:138] Found kubelet problem: Dec 09 23:16:36 old-k8s-version-098617 kubelet[661]: E1209 23:16:36.595249     661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1209 23:20:18.658658  214436 logs.go:138] Found kubelet problem: Dec 09 23:16:40 old-k8s-version-098617 kubelet[661]: E1209 23:16:40.594761     661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
	W1209 23:20:18.658854  214436 logs.go:138] Found kubelet problem: Dec 09 23:16:49 old-k8s-version-098617 kubelet[661]: E1209 23:16:49.599764     661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1209 23:20:18.659185  214436 logs.go:138] Found kubelet problem: Dec 09 23:16:55 old-k8s-version-098617 kubelet[661]: E1209 23:16:55.595001     661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
	W1209 23:20:18.659368  214436 logs.go:138] Found kubelet problem: Dec 09 23:17:02 old-k8s-version-098617 kubelet[661]: E1209 23:17:02.595200     661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1209 23:20:18.659698  214436 logs.go:138] Found kubelet problem: Dec 09 23:17:09 old-k8s-version-098617 kubelet[661]: E1209 23:17:09.594689     661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
	W1209 23:20:18.659882  214436 logs.go:138] Found kubelet problem: Dec 09 23:17:16 old-k8s-version-098617 kubelet[661]: E1209 23:17:16.595250     661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1209 23:20:18.660209  214436 logs.go:138] Found kubelet problem: Dec 09 23:17:22 old-k8s-version-098617 kubelet[661]: E1209 23:17:22.594748     661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
	W1209 23:20:18.660394  214436 logs.go:138] Found kubelet problem: Dec 09 23:17:27 old-k8s-version-098617 kubelet[661]: E1209 23:17:27.596051     661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1209 23:20:18.660719  214436 logs.go:138] Found kubelet problem: Dec 09 23:17:36 old-k8s-version-098617 kubelet[661]: E1209 23:17:36.594954     661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
	W1209 23:20:18.660901  214436 logs.go:138] Found kubelet problem: Dec 09 23:17:39 old-k8s-version-098617 kubelet[661]: E1209 23:17:39.595361     661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1209 23:20:18.661251  214436 logs.go:138] Found kubelet problem: Dec 09 23:17:47 old-k8s-version-098617 kubelet[661]: E1209 23:17:47.595830     661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
	W1209 23:20:18.663825  214436 logs.go:138] Found kubelet problem: Dec 09 23:17:53 old-k8s-version-098617 kubelet[661]: E1209 23:17:53.605847     661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1209 23:20:18.664420  214436 logs.go:138] Found kubelet problem: Dec 09 23:17:58 old-k8s-version-098617 kubelet[661]: E1209 23:17:58.790114     661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
	W1209 23:20:18.664751  214436 logs.go:138] Found kubelet problem: Dec 09 23:17:59 old-k8s-version-098617 kubelet[661]: E1209 23:17:59.799377     661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
	W1209 23:20:18.664937  214436 logs.go:138] Found kubelet problem: Dec 09 23:18:05 old-k8s-version-098617 kubelet[661]: E1209 23:18:05.595490     661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1209 23:20:18.665261  214436 logs.go:138] Found kubelet problem: Dec 09 23:18:12 old-k8s-version-098617 kubelet[661]: E1209 23:18:12.594695     661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
	W1209 23:20:18.665446  214436 logs.go:138] Found kubelet problem: Dec 09 23:18:16 old-k8s-version-098617 kubelet[661]: E1209 23:18:16.595137     661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1209 23:20:18.665770  214436 logs.go:138] Found kubelet problem: Dec 09 23:18:24 old-k8s-version-098617 kubelet[661]: E1209 23:18:24.595133     661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
	W1209 23:20:18.665954  214436 logs.go:138] Found kubelet problem: Dec 09 23:18:30 old-k8s-version-098617 kubelet[661]: E1209 23:18:30.595155     661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1209 23:20:18.666280  214436 logs.go:138] Found kubelet problem: Dec 09 23:18:38 old-k8s-version-098617 kubelet[661]: E1209 23:18:38.594827     661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
	W1209 23:20:18.666467  214436 logs.go:138] Found kubelet problem: Dec 09 23:18:45 old-k8s-version-098617 kubelet[661]: E1209 23:18:45.599645     661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1209 23:20:18.666860  214436 logs.go:138] Found kubelet problem: Dec 09 23:18:52 old-k8s-version-098617 kubelet[661]: E1209 23:18:52.594642     661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
	W1209 23:20:18.667045  214436 logs.go:138] Found kubelet problem: Dec 09 23:18:56 old-k8s-version-098617 kubelet[661]: E1209 23:18:56.595179     661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1209 23:20:18.667374  214436 logs.go:138] Found kubelet problem: Dec 09 23:19:05 old-k8s-version-098617 kubelet[661]: E1209 23:19:05.595377     661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
	W1209 23:20:18.667717  214436 logs.go:138] Found kubelet problem: Dec 09 23:19:09 old-k8s-version-098617 kubelet[661]: E1209 23:19:09.601501     661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1209 23:20:18.668122  214436 logs.go:138] Found kubelet problem: Dec 09 23:19:17 old-k8s-version-098617 kubelet[661]: E1209 23:19:17.599504     661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
	W1209 23:20:18.668344  214436 logs.go:138] Found kubelet problem: Dec 09 23:19:23 old-k8s-version-098617 kubelet[661]: E1209 23:19:23.595388     661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1209 23:20:18.668726  214436 logs.go:138] Found kubelet problem: Dec 09 23:19:28 old-k8s-version-098617 kubelet[661]: E1209 23:19:28.594771     661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
	W1209 23:20:18.668945  214436 logs.go:138] Found kubelet problem: Dec 09 23:19:34 old-k8s-version-098617 kubelet[661]: E1209 23:19:34.595081     661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1209 23:20:18.669302  214436 logs.go:138] Found kubelet problem: Dec 09 23:19:40 old-k8s-version-098617 kubelet[661]: E1209 23:19:40.595347     661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
	W1209 23:20:18.669492  214436 logs.go:138] Found kubelet problem: Dec 09 23:19:48 old-k8s-version-098617 kubelet[661]: E1209 23:19:48.595049     661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1209 23:20:18.669823  214436 logs.go:138] Found kubelet problem: Dec 09 23:19:54 old-k8s-version-098617 kubelet[661]: E1209 23:19:54.595330     661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
	W1209 23:20:18.670007  214436 logs.go:138] Found kubelet problem: Dec 09 23:20:01 old-k8s-version-098617 kubelet[661]: E1209 23:20:01.601974     661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1209 23:20:18.670332  214436 logs.go:138] Found kubelet problem: Dec 09 23:20:09 old-k8s-version-098617 kubelet[661]: E1209 23:20:09.599956     661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
	W1209 23:20:18.670536  214436 logs.go:138] Found kubelet problem: Dec 09 23:20:12 old-k8s-version-098617 kubelet[661]: E1209 23:20:12.596822     661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I1209 23:20:18.670551  214436 logs.go:123] Gathering logs for kube-apiserver [9d1b42abf41370b358213f4f369435098a79f49c98a0b87c365dbfb7068093df] ...
	I1209 23:20:18.670566  214436 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d1b42abf41370b358213f4f369435098a79f49c98a0b87c365dbfb7068093df"
	I1209 23:20:18.726336  214436 logs.go:123] Gathering logs for coredns [dae54bdfe8b6d504e8061ba13c99a15682b561cf9bca5574313dd3097076811b] ...
	I1209 23:20:18.726367  214436 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dae54bdfe8b6d504e8061ba13c99a15682b561cf9bca5574313dd3097076811b"
	I1209 23:20:18.772894  214436 logs.go:123] Gathering logs for kube-proxy [3a123be1d317e1e4f654bafa3493726c60356903a91f6e64d7b29782641f2d50] ...
	I1209 23:20:18.772924  214436 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3a123be1d317e1e4f654bafa3493726c60356903a91f6e64d7b29782641f2d50"
	I1209 23:20:18.811216  214436 logs.go:123] Gathering logs for storage-provisioner [5cdaa2e6255fc4d96282bee6cb565b6257b645c8f8ad628144066bc11b36d0aa] ...
	I1209 23:20:18.811248  214436 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5cdaa2e6255fc4d96282bee6cb565b6257b645c8f8ad628144066bc11b36d0aa"
	I1209 23:20:18.863382  214436 logs.go:123] Gathering logs for dmesg ...
	I1209 23:20:18.863410  214436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 23:20:18.878963  214436 logs.go:123] Gathering logs for kube-apiserver [6d1ffef5c3c113df0c1c5643d627d680020df30e159ab4a69ccf738c6f7c09e6] ...
	I1209 23:20:18.878998  214436 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d1ffef5c3c113df0c1c5643d627d680020df30e159ab4a69ccf738c6f7c09e6"
	I1209 23:20:18.955508  214436 logs.go:123] Gathering logs for kube-scheduler [5693d8f440cbbfff0094faddfc750157e27b436336b58b60026d6e4b6afb7c19] ...
	I1209 23:20:18.955543  214436 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5693d8f440cbbfff0094faddfc750157e27b436336b58b60026d6e4b6afb7c19"
	I1209 23:20:18.996932  214436 logs.go:123] Gathering logs for kube-controller-manager [f68628204e6f96e082e5b48a6dc631b0a69e7de46bf5da75a9ca3e6911da3442] ...
	I1209 23:20:18.996961  214436 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f68628204e6f96e082e5b48a6dc631b0a69e7de46bf5da75a9ca3e6911da3442"
	I1209 23:20:19.055585  214436 logs.go:123] Gathering logs for storage-provisioner [9c62f2e12bccb234691d9df725b23072f6bf214069ff068aea47352ec6a1ef2d] ...
	I1209 23:20:19.055622  214436 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c62f2e12bccb234691d9df725b23072f6bf214069ff068aea47352ec6a1ef2d"
	I1209 23:20:19.101889  214436 logs.go:123] Gathering logs for etcd [063e1c49d2c947a9f19b4fae6421961ea9a67ce263a258ff3303dcc0ab203f1c] ...
	I1209 23:20:19.101918  214436 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 063e1c49d2c947a9f19b4fae6421961ea9a67ce263a258ff3303dcc0ab203f1c"
	I1209 23:20:19.152002  214436 logs.go:123] Gathering logs for kube-scheduler [e3e7eabe1dad882678b1f0f2e7a8cff160d9b5e4146196f53ed8533082a0103e] ...
	I1209 23:20:19.152031  214436 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e3e7eabe1dad882678b1f0f2e7a8cff160d9b5e4146196f53ed8533082a0103e"
	I1209 23:20:19.191323  214436 logs.go:123] Gathering logs for containerd ...
	I1209 23:20:19.191365  214436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 23:20:19.257897  214436 logs.go:123] Gathering logs for container status ...
	I1209 23:20:19.257932  214436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 23:20:19.309335  214436 logs.go:123] Gathering logs for describe nodes ...
	I1209 23:20:19.309363  214436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 23:20:19.459995  214436 logs.go:123] Gathering logs for etcd [55743b620c44bd40d4ac5faf5671b922b7011a15211b36e43cf05dad9e0fdbfd] ...
	I1209 23:20:19.460028  214436 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 55743b620c44bd40d4ac5faf5671b922b7011a15211b36e43cf05dad9e0fdbfd"
	I1209 23:20:19.508050  214436 out.go:358] Setting ErrFile to fd 2...
	I1209 23:20:19.508080  214436 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1209 23:20:19.508160  214436 out.go:270] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W1209 23:20:19.508177  214436 out.go:270]   Dec 09 23:19:48 old-k8s-version-098617 kubelet[661]: E1209 23:19:48.595049     661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Dec 09 23:19:48 old-k8s-version-098617 kubelet[661]: E1209 23:19:48.595049     661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1209 23:20:19.508200  214436 out.go:270]   Dec 09 23:19:54 old-k8s-version-098617 kubelet[661]: E1209 23:19:54.595330     661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
	  Dec 09 23:19:54 old-k8s-version-098617 kubelet[661]: E1209 23:19:54.595330     661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
	W1209 23:20:19.508214  214436 out.go:270]   Dec 09 23:20:01 old-k8s-version-098617 kubelet[661]: E1209 23:20:01.601974     661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Dec 09 23:20:01 old-k8s-version-098617 kubelet[661]: E1209 23:20:01.601974     661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1209 23:20:19.508220  214436 out.go:270]   Dec 09 23:20:09 old-k8s-version-098617 kubelet[661]: E1209 23:20:09.599956     661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
	  Dec 09 23:20:09 old-k8s-version-098617 kubelet[661]: E1209 23:20:09.599956     661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
	W1209 23:20:19.508239  214436 out.go:270]   Dec 09 23:20:12 old-k8s-version-098617 kubelet[661]: E1209 23:20:12.596822     661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Dec 09 23:20:12 old-k8s-version-098617 kubelet[661]: E1209 23:20:12.596822     661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I1209 23:20:19.508273  214436 out.go:358] Setting ErrFile to fd 2...
	I1209 23:20:19.508279  214436 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 23:20:29.510549  214436 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1209 23:20:29.524882  214436 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1209 23:20:29.528146  214436 out.go:201] 
	W1209 23:20:29.530871  214436 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W1209 23:20:29.530913  214436 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	* Suggestion: Control Plane could not update, try minikube delete --all --purge
	W1209 23:20:29.530934  214436 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	* Related issue: https://github.com/kubernetes/minikube/issues/11417
	W1209 23:20:29.530940  214436 out.go:270] * 
	* 
	W1209 23:20:29.531888  214436 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 23:20:29.534511  214436 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p old-k8s-version-098617 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0": exit status 102
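Note on the failure mode: exit status 102 is minikube's K8S_UNHEALTHY_CONTROL_PLANE error shown in the stderr block above (the apiserver answers /healthz with 200, but the control plane never reports v1.20.0 within the 6m0s wait). The log's own suggestion is to purge the profile and retry; a minimal recovery sketch reusing the exact arguments from this test run (a hypothetical follow-up, not something this job executed):

	out/minikube-linux-arm64 delete --all --purge
	out/minikube-linux-arm64 start -p old-k8s-version-098617 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.20.0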
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-098617
helpers_test.go:235: (dbg) docker inspect old-k8s-version-098617:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "57f412c304cdcbf8e565c298234fb1bde21c07bb763667ba3ada7b85e7c9515b",
	        "Created": "2024-12-09T23:11:15.751274578Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 214659,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-12-09T23:14:11.847036432Z",
	            "FinishedAt": "2024-12-09T23:14:10.741964012Z"
	        },
	        "Image": "sha256:51526bd7c0894c18bc1ef50650a0aaaea3bed24f70f72f77ac668ae72dfff137",
	        "ResolvConfPath": "/var/lib/docker/containers/57f412c304cdcbf8e565c298234fb1bde21c07bb763667ba3ada7b85e7c9515b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/57f412c304cdcbf8e565c298234fb1bde21c07bb763667ba3ada7b85e7c9515b/hostname",
	        "HostsPath": "/var/lib/docker/containers/57f412c304cdcbf8e565c298234fb1bde21c07bb763667ba3ada7b85e7c9515b/hosts",
	        "LogPath": "/var/lib/docker/containers/57f412c304cdcbf8e565c298234fb1bde21c07bb763667ba3ada7b85e7c9515b/57f412c304cdcbf8e565c298234fb1bde21c07bb763667ba3ada7b85e7c9515b-json.log",
	        "Name": "/old-k8s-version-098617",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-098617:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-098617",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/b29bcf2165f013f78da042422e709637844eaa3274ebb164f571f14a16d0892f-init/diff:/var/lib/docker/overlay2/6cfa97401e314435cf365c42eba2c46d097e4b7837b825b4a08546b8c35c8dc6/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b29bcf2165f013f78da042422e709637844eaa3274ebb164f571f14a16d0892f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b29bcf2165f013f78da042422e709637844eaa3274ebb164f571f14a16d0892f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b29bcf2165f013f78da042422e709637844eaa3274ebb164f571f14a16d0892f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-098617",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-098617/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-098617",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-098617",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-098617",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f23de1d61de65bb9b88f9be54321ec1d0391ac841fd142a2f92b88d1e97aa40b",
	            "SandboxKey": "/var/run/docker/netns/f23de1d61de6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33063"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33064"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33067"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33065"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33066"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-098617": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null,
	                    "NetworkID": "bc0947319246c73a3e3ae762238cdf8952fd9005098fc7272274c70a84c92d4d",
	                    "EndpointID": "07754624237b3db22b03fe89e5ec978786d0bc29936d95bf18fbfcfe8eab1e60",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-098617",
	                        "57f412c304cd"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
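The inspect output above shows the container still running, with 8443/tcp published on 127.0.0.1:33066 and the cluster address 192.168.76.2 that the healthz probes in the stderr log target. For reference, those specific fields can be pulled directly with Go-template filters on docker inspect; a small sketch assuming the same container name as this profile:

	docker inspect -f '{{json .NetworkSettings.Ports}}' old-k8s-version-098617
	docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' old-k8s-version-098617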
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-098617 -n old-k8s-version-098617
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-098617 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-098617 logs -n 25: (2.843181288s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |         Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	| start   | -p cert-expiration-521962                              | cert-expiration-521962   | jenkins | v1.34.0 | 09 Dec 24 23:10 UTC | 09 Dec 24 23:10 UTC |
	|         | --memory=2048                                          |                          |         |         |                     |                     |
	|         | --cert-expiration=3m                                   |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	| ssh     | force-systemd-env-786239                               | force-systemd-env-786239 | jenkins | v1.34.0 | 09 Dec 24 23:10 UTC | 09 Dec 24 23:10 UTC |
	|         | ssh cat                                                |                          |         |         |                     |                     |
	|         | /etc/containerd/config.toml                            |                          |         |         |                     |                     |
	| delete  | -p force-systemd-env-786239                            | force-systemd-env-786239 | jenkins | v1.34.0 | 09 Dec 24 23:10 UTC | 09 Dec 24 23:10 UTC |
	| start   | -p cert-options-171060                                 | cert-options-171060      | jenkins | v1.34.0 | 09 Dec 24 23:10 UTC | 09 Dec 24 23:11 UTC |
	|         | --memory=2048                                          |                          |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1                              |                          |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15                          |                          |         |         |                     |                     |
	|         | --apiserver-names=localhost                            |                          |         |         |                     |                     |
	|         | --apiserver-names=www.google.com                       |                          |         |         |                     |                     |
	|         | --apiserver-port=8555                                  |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	| ssh     | cert-options-171060 ssh                                | cert-options-171060      | jenkins | v1.34.0 | 09 Dec 24 23:11 UTC | 09 Dec 24 23:11 UTC |
	|         | openssl x509 -text -noout -in                          |                          |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                          |         |         |                     |                     |
	| ssh     | -p cert-options-171060 -- sudo                         | cert-options-171060      | jenkins | v1.34.0 | 09 Dec 24 23:11 UTC | 09 Dec 24 23:11 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                          |         |         |                     |                     |
	| delete  | -p cert-options-171060                                 | cert-options-171060      | jenkins | v1.34.0 | 09 Dec 24 23:11 UTC | 09 Dec 24 23:11 UTC |
	| start   | -p old-k8s-version-098617                              | old-k8s-version-098617   | jenkins | v1.34.0 | 09 Dec 24 23:11 UTC | 09 Dec 24 23:13 UTC |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                          |         |         |                     |                     |
	|         | --kvm-network=default                                  |                          |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                          |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                          |         |         |                     |                     |
	|         | --keep-context=false                                   |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                          |         |         |                     |                     |
	| start   | -p cert-expiration-521962                              | cert-expiration-521962   | jenkins | v1.34.0 | 09 Dec 24 23:13 UTC | 09 Dec 24 23:14 UTC |
	|         | --memory=2048                                          |                          |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-098617        | old-k8s-version-098617   | jenkins | v1.34.0 | 09 Dec 24 23:13 UTC | 09 Dec 24 23:13 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                          |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                          |         |         |                     |                     |
	| stop    | -p old-k8s-version-098617                              | old-k8s-version-098617   | jenkins | v1.34.0 | 09 Dec 24 23:13 UTC | 09 Dec 24 23:14 UTC |
	|         | --alsologtostderr -v=3                                 |                          |         |         |                     |                     |
	| delete  | -p cert-expiration-521962                              | cert-expiration-521962   | jenkins | v1.34.0 | 09 Dec 24 23:14 UTC | 09 Dec 24 23:14 UTC |
	| start   | -p no-preload-548785                                   | no-preload-548785        | jenkins | v1.34.0 | 09 Dec 24 23:14 UTC | 09 Dec 24 23:15 UTC |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr                                      |                          |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                          |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-098617             | old-k8s-version-098617   | jenkins | v1.34.0 | 09 Dec 24 23:14 UTC | 09 Dec 24 23:14 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                          |         |         |                     |                     |
	| start   | -p old-k8s-version-098617                              | old-k8s-version-098617   | jenkins | v1.34.0 | 09 Dec 24 23:14 UTC |                     |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                          |         |         |                     |                     |
	|         | --kvm-network=default                                  |                          |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                          |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                          |         |         |                     |                     |
	|         | --keep-context=false                                   |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                          |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-548785             | no-preload-548785        | jenkins | v1.34.0 | 09 Dec 24 23:15 UTC | 09 Dec 24 23:15 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                          |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                          |         |         |                     |                     |
	| stop    | -p no-preload-548785                                   | no-preload-548785        | jenkins | v1.34.0 | 09 Dec 24 23:15 UTC | 09 Dec 24 23:15 UTC |
	|         | --alsologtostderr -v=3                                 |                          |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-548785                  | no-preload-548785        | jenkins | v1.34.0 | 09 Dec 24 23:15 UTC | 09 Dec 24 23:15 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                          |         |         |                     |                     |
	| start   | -p no-preload-548785                                   | no-preload-548785        | jenkins | v1.34.0 | 09 Dec 24 23:15 UTC | 09 Dec 24 23:20 UTC |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr                                      |                          |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                          |         |         |                     |                     |
	| image   | no-preload-548785 image list                           | no-preload-548785        | jenkins | v1.34.0 | 09 Dec 24 23:20 UTC | 09 Dec 24 23:20 UTC |
	|         | --format=json                                          |                          |         |         |                     |                     |
	| pause   | -p no-preload-548785                                   | no-preload-548785        | jenkins | v1.34.0 | 09 Dec 24 23:20 UTC | 09 Dec 24 23:20 UTC |
	|         | --alsologtostderr -v=1                                 |                          |         |         |                     |                     |
	| unpause | -p no-preload-548785                                   | no-preload-548785        | jenkins | v1.34.0 | 09 Dec 24 23:20 UTC | 09 Dec 24 23:20 UTC |
	|         | --alsologtostderr -v=1                                 |                          |         |         |                     |                     |
	| delete  | -p no-preload-548785                                   | no-preload-548785        | jenkins | v1.34.0 | 09 Dec 24 23:20 UTC | 09 Dec 24 23:20 UTC |
	| delete  | -p no-preload-548785                                   | no-preload-548785        | jenkins | v1.34.0 | 09 Dec 24 23:20 UTC | 09 Dec 24 23:20 UTC |
	| start   | -p embed-certs-744076                                  | embed-certs-744076       | jenkins | v1.34.0 | 09 Dec 24 23:20 UTC |                     |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                          |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                          |         |         |                     |                     |
	|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/09 23:20:27
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.23.2 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 23:20:27.869402  226785 out.go:345] Setting OutFile to fd 1 ...
	I1209 23:20:27.871169  226785 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 23:20:27.871222  226785 out.go:358] Setting ErrFile to fd 2...
	I1209 23:20:27.871245  226785 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 23:20:27.871617  226785 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19888-2244/.minikube/bin
	I1209 23:20:27.872280  226785 out.go:352] Setting JSON to false
	I1209 23:20:27.873364  226785 start.go:129] hostinfo: {"hostname":"ip-172-31-24-2","uptime":3775,"bootTime":1733782653,"procs":217,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1072-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1209 23:20:27.873545  226785 start.go:139] virtualization:  
	I1209 23:20:27.875444  226785 out.go:177] * [embed-certs-744076] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1209 23:20:27.878885  226785 out.go:177]   - MINIKUBE_LOCATION=19888
	I1209 23:20:27.879016  226785 notify.go:220] Checking for updates...
	I1209 23:20:27.881108  226785 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 23:20:27.882414  226785 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19888-2244/kubeconfig
	I1209 23:20:27.884230  226785 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19888-2244/.minikube
	I1209 23:20:27.887617  226785 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1209 23:20:27.889115  226785 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 23:20:27.890817  226785 config.go:182] Loaded profile config "old-k8s-version-098617": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I1209 23:20:27.890913  226785 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 23:20:27.937643  226785 docker.go:123] docker version: linux-27.4.0:Docker Engine - Community
	I1209 23:20:27.937767  226785 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 23:20:28.005266  226785 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:33 OomKillDisable:true NGoroutines:52 SystemTime:2024-12-09 23:20:27.996099755 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1072-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-n
f-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0]] Warnings:<nil>}}
	I1209 23:20:28.005381  226785 docker.go:318] overlay module found
	I1209 23:20:28.011614  226785 out.go:177] * Using the docker driver based on user configuration
	I1209 23:20:28.012938  226785 start.go:297] selected driver: docker
	I1209 23:20:28.012968  226785 start.go:901] validating driver "docker" against <nil>
	I1209 23:20:28.012985  226785 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 23:20:28.013913  226785 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 23:20:28.088716  226785 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:33 OomKillDisable:true NGoroutines:52 SystemTime:2024-12-09 23:20:28.078901632 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1072-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-n
f-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0]] Warnings:<nil>}}
	I1209 23:20:28.088927  226785 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1209 23:20:28.089170  226785 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 23:20:28.090868  226785 out.go:177] * Using Docker driver with root privileges
	I1209 23:20:28.092522  226785 cni.go:84] Creating CNI manager for ""
	I1209 23:20:28.092597  226785 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1209 23:20:28.092611  226785 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1209 23:20:28.092696  226785 start.go:340] cluster config:
	{Name:embed-certs-744076 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-744076 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: Stati
cIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 23:20:28.094128  226785 out.go:177] * Starting "embed-certs-744076" primary control-plane node in "embed-certs-744076" cluster
	I1209 23:20:28.095686  226785 cache.go:121] Beginning downloading kic base image for docker with containerd
	I1209 23:20:28.097465  226785 out.go:177] * Pulling base image v0.0.45-1730888964-19917 ...
	I1209 23:20:28.098942  226785 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime containerd
	I1209 23:20:28.099005  226785 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19888-2244/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-containerd-overlay2-arm64.tar.lz4
	I1209 23:20:28.099013  226785 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 in local docker daemon
	I1209 23:20:28.099033  226785 cache.go:56] Caching tarball of preloaded images
	I1209 23:20:28.099118  226785 preload.go:172] Found /home/jenkins/minikube-integration/19888-2244/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1209 23:20:28.099128  226785 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on containerd
	I1209 23:20:28.099234  226785 profile.go:143] Saving config to /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/embed-certs-744076/config.json ...
	I1209 23:20:28.099251  226785 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/embed-certs-744076/config.json: {Name:mk841e42c3bc2d87c19bc50e9458d984fbc41d39 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:20:28.120814  226785 image.go:98] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 in local docker daemon, skipping pull
	I1209 23:20:28.120834  226785 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 exists in daemon, skipping load
	I1209 23:20:28.120853  226785 cache.go:194] Successfully downloaded all kic artifacts
	I1209 23:20:28.120883  226785 start.go:360] acquireMachinesLock for embed-certs-744076: {Name:mkca6141fc0cb8d284cb727d6174977d87cddf09 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 23:20:28.120993  226785 start.go:364] duration metric: took 86.604µs to acquireMachinesLock for "embed-certs-744076"
	I1209 23:20:28.121020  226785 start.go:93] Provisioning new machine with config: &{Name:embed-certs-744076 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-744076 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableM
etrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1209 23:20:28.121175  226785 start.go:125] createHost starting for "" (driver="docker")
	I1209 23:20:29.510549  214436 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1209 23:20:29.524882  214436 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1209 23:20:29.528146  214436 out.go:201] 
	W1209 23:20:29.530871  214436 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W1209 23:20:29.530913  214436 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	W1209 23:20:29.530934  214436 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	W1209 23:20:29.530940  214436 out.go:270] * 
	W1209 23:20:29.531888  214436 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 23:20:29.534511  214436 out.go:201] 
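	Note on the failure above: the second start exited with K8S_UNHEALTHY_CONTROL_PLANE because the 6m0s wait ended before the control plane was reported as updated to v1.20.0, even though /healthz itself was answering 200. A hedged way to compare what the cluster actually reports against the requested --kubernetes-version is sketched below; it was not part of the recorded run and it assumes the kubeconfig context that minikube creates for this profile name.
	  # Hedged diagnostic sketch (assumes context "old-k8s-version-098617" exists in the kubeconfig)
	  kubectl --context old-k8s-version-098617 get nodes -o wide   # kubelet version per node
	  kubectl --context old-k8s-version-098617 version             # client and server versions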
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	26b3428efb807       523cad1a4df73       2 minutes ago       Exited              dashboard-metrics-scraper   5                   877abb70d5e3f       dashboard-metrics-scraper-8d5bb5db8-hmfdq
	5cdaa2e6255fc       ba04bb24b9575       5 minutes ago       Running             storage-provisioner         3                   18a6972ae3661       storage-provisioner
	f5e0a0afceebb       20b332c9a70d8       5 minutes ago       Running             kubernetes-dashboard        0                   59bea17c71ea1       kubernetes-dashboard-cd95d586-9w2zp
	f77b8039910f9       1611cd07b61d5       5 minutes ago       Running             busybox                     1                   bcc333eda2a9d       busybox
	dae54bdfe8b6d       db91994f4ee8f       5 minutes ago       Running             coredns                     1                   698bb49131392       coredns-74ff55c5b-tz959
	9c62f2e12bccb       ba04bb24b9575       5 minutes ago       Exited              storage-provisioner         2                   18a6972ae3661       storage-provisioner
	3a123be1d317e       25a5233254979       5 minutes ago       Running             kube-proxy                  1                   ffa2eadb85d0e       kube-proxy-d8xtk
	394606f289ebf       2be0bcf609c65       5 minutes ago       Running             kindnet-cni                 1                   5c34aae934876       kindnet-8g8xl
	9d1b42abf4137       2c08bbbc02d3a       6 minutes ago       Running             kube-apiserver              1                   5eca194230e16       kube-apiserver-old-k8s-version-098617
	10660454cbd9d       1df8a2b116bd1       6 minutes ago       Running             kube-controller-manager     1                   76b6bd2de3b68       kube-controller-manager-old-k8s-version-098617
	e3e7eabe1dad8       e7605f88f17d6       6 minutes ago       Running             kube-scheduler              1                   a19a7fb23b0ed       kube-scheduler-old-k8s-version-098617
	55743b620c44b       05b738aa1bc63       6 minutes ago       Running             etcd                        1                   0d636fcae49c7       etcd-old-k8s-version-098617
	d39cebd99127e       1611cd07b61d5       6 minutes ago       Exited              busybox                     0                   7dad798c84a92       busybox
	99d9ed2f5b230       db91994f4ee8f       8 minutes ago       Exited              coredns                     0                   4f6103318c3c0       coredns-74ff55c5b-tz959
	5125ce4b5b492       2be0bcf609c65       8 minutes ago       Exited              kindnet-cni                 0                   cbb7820bf820e       kindnet-8g8xl
	a33fca2389d21       25a5233254979       8 minutes ago       Exited              kube-proxy                  0                   cf2ee86c3f32c       kube-proxy-d8xtk
	5693d8f440cbb       e7605f88f17d6       8 minutes ago       Exited              kube-scheduler              0                   d51e2f1483e9b       kube-scheduler-old-k8s-version-098617
	063e1c49d2c94       05b738aa1bc63       8 minutes ago       Exited              etcd                        0                   5fd7016bca415       etcd-old-k8s-version-098617
	f68628204e6f9       1df8a2b116bd1       8 minutes ago       Exited              kube-controller-manager     0                   07b952f17d5ef       kube-controller-manager-old-k8s-version-098617
	6d1ffef5c3c11       2c08bbbc02d3a       8 minutes ago       Exited              kube-apiserver              0                   84d5b210b79df       kube-apiserver-old-k8s-version-098617
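	The table above shows the restarted control plane (attempt-1 containers Running), five exited attempts of dashboard-metrics-scraper, and no metrics-server container at all. If the same listing is needed straight from the runtime rather than from minikube logs, a hedged sketch follows; it assumes crictl is present in the node image, which is the norm for the containerd runtime, and it was not executed as part of this run.
	  # Hedged sketch: list all containers via the CRI, mirroring the table above
	  minikube -p old-k8s-version-098617 ssh -- sudo crictl ps -a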
	
	
	==> containerd <==
	Dec 09 23:16:24 old-k8s-version-098617 containerd[568]: time="2024-12-09T23:16:24.602539647Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Dec 09 23:16:24 old-k8s-version-098617 containerd[568]: time="2024-12-09T23:16:24.602592537Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Dec 09 23:16:27 old-k8s-version-098617 containerd[568]: time="2024-12-09T23:16:27.598872230Z" level=info msg="CreateContainer within sandbox \"877abb70d5e3f16a0ef459d179dcfdfa8cbd958892fbaa225454017f2baf0042\" for container name:\"dashboard-metrics-scraper\"  attempt:4"
	Dec 09 23:16:27 old-k8s-version-098617 containerd[568]: time="2024-12-09T23:16:27.622268230Z" level=info msg="CreateContainer within sandbox \"877abb70d5e3f16a0ef459d179dcfdfa8cbd958892fbaa225454017f2baf0042\" for name:\"dashboard-metrics-scraper\"  attempt:4 returns container id \"da1a0b8be74aa037ece31c8c9b11c9746d4c6edfcc05b6aba6726603a8206080\""
	Dec 09 23:16:27 old-k8s-version-098617 containerd[568]: time="2024-12-09T23:16:27.622946757Z" level=info msg="StartContainer for \"da1a0b8be74aa037ece31c8c9b11c9746d4c6edfcc05b6aba6726603a8206080\""
	Dec 09 23:16:27 old-k8s-version-098617 containerd[568]: time="2024-12-09T23:16:27.726144658Z" level=info msg="StartContainer for \"da1a0b8be74aa037ece31c8c9b11c9746d4c6edfcc05b6aba6726603a8206080\" returns successfully"
	Dec 09 23:16:27 old-k8s-version-098617 containerd[568]: time="2024-12-09T23:16:27.753508481Z" level=info msg="shim disconnected" id=da1a0b8be74aa037ece31c8c9b11c9746d4c6edfcc05b6aba6726603a8206080 namespace=k8s.io
	Dec 09 23:16:27 old-k8s-version-098617 containerd[568]: time="2024-12-09T23:16:27.753848807Z" level=warning msg="cleaning up after shim disconnected" id=da1a0b8be74aa037ece31c8c9b11c9746d4c6edfcc05b6aba6726603a8206080 namespace=k8s.io
	Dec 09 23:16:27 old-k8s-version-098617 containerd[568]: time="2024-12-09T23:16:27.753925400Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Dec 09 23:16:27 old-k8s-version-098617 containerd[568]: time="2024-12-09T23:16:27.765789375Z" level=warning msg="cleanup warnings time=\"2024-12-09T23:16:27Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
	Dec 09 23:16:28 old-k8s-version-098617 containerd[568]: time="2024-12-09T23:16:28.527480164Z" level=info msg="RemoveContainer for \"6f5e80f40b735d4b9e3bcb78fb0271ab065a486528e08373cbcf421f7d57ec07\""
	Dec 09 23:16:28 old-k8s-version-098617 containerd[568]: time="2024-12-09T23:16:28.532360667Z" level=info msg="RemoveContainer for \"6f5e80f40b735d4b9e3bcb78fb0271ab065a486528e08373cbcf421f7d57ec07\" returns successfully"
	Dec 09 23:17:53 old-k8s-version-098617 containerd[568]: time="2024-12-09T23:17:53.596069074Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 09 23:17:53 old-k8s-version-098617 containerd[568]: time="2024-12-09T23:17:53.603531922Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" host=fake.domain
	Dec 09 23:17:53 old-k8s-version-098617 containerd[568]: time="2024-12-09T23:17:53.605212475Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Dec 09 23:17:53 old-k8s-version-098617 containerd[568]: time="2024-12-09T23:17:53.605279181Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Dec 09 23:17:58 old-k8s-version-098617 containerd[568]: time="2024-12-09T23:17:58.596811809Z" level=info msg="CreateContainer within sandbox \"877abb70d5e3f16a0ef459d179dcfdfa8cbd958892fbaa225454017f2baf0042\" for container name:\"dashboard-metrics-scraper\"  attempt:5"
	Dec 09 23:17:58 old-k8s-version-098617 containerd[568]: time="2024-12-09T23:17:58.607227318Z" level=info msg="CreateContainer within sandbox \"877abb70d5e3f16a0ef459d179dcfdfa8cbd958892fbaa225454017f2baf0042\" for name:\"dashboard-metrics-scraper\"  attempt:5 returns container id \"26b3428efb80789472c470c3dfe33d2069cefb3c11164ae469638814f3870a07\""
	Dec 09 23:17:58 old-k8s-version-098617 containerd[568]: time="2024-12-09T23:17:58.608050302Z" level=info msg="StartContainer for \"26b3428efb80789472c470c3dfe33d2069cefb3c11164ae469638814f3870a07\""
	Dec 09 23:17:58 old-k8s-version-098617 containerd[568]: time="2024-12-09T23:17:58.700069142Z" level=info msg="StartContainer for \"26b3428efb80789472c470c3dfe33d2069cefb3c11164ae469638814f3870a07\" returns successfully"
	Dec 09 23:17:58 old-k8s-version-098617 containerd[568]: time="2024-12-09T23:17:58.725522610Z" level=info msg="shim disconnected" id=26b3428efb80789472c470c3dfe33d2069cefb3c11164ae469638814f3870a07 namespace=k8s.io
	Dec 09 23:17:58 old-k8s-version-098617 containerd[568]: time="2024-12-09T23:17:58.725603642Z" level=warning msg="cleaning up after shim disconnected" id=26b3428efb80789472c470c3dfe33d2069cefb3c11164ae469638814f3870a07 namespace=k8s.io
	Dec 09 23:17:58 old-k8s-version-098617 containerd[568]: time="2024-12-09T23:17:58.725669545Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Dec 09 23:17:58 old-k8s-version-098617 containerd[568]: time="2024-12-09T23:17:58.804049711Z" level=info msg="RemoveContainer for \"da1a0b8be74aa037ece31c8c9b11c9746d4c6edfcc05b6aba6726603a8206080\""
	Dec 09 23:17:58 old-k8s-version-098617 containerd[568]: time="2024-12-09T23:17:58.808920426Z" level=info msg="RemoveContainer for \"da1a0b8be74aa037ece31c8c9b11c9746d4c6edfcc05b6aba6726603a8206080\" returns successfully"
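	The containerd entries above repeat the same pull failure: fake.domain cannot be resolved by the node's DNS (192.168.76.1:53), so fake.domain/registry.k8s.io/echoserver:1.4 can never be pulled, which is consistent with no metrics-server container ever appearing in the container status table. A hedged way to confirm the resolution failure from inside the node is sketched below; it assumes nslookup is installed in the node image and was not part of the recorded run.
	  # Hedged sketch: confirm that fake.domain does not resolve from the node
	  minikube -p old-k8s-version-098617 ssh -- nslookup fake.domain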
	
	
	==> coredns [99d9ed2f5b230d0319b2465314133899b5c950239a3b96d9f4feb405f1b18650] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:40089 - 34616 "HINFO IN 8891620888748784360.2156695852859657496. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013919513s
	
	
	==> coredns [dae54bdfe8b6d504e8061ba13c99a15682b561cf9bca5574313dd3097076811b] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:56340 - 39104 "HINFO IN 1986183853171438515.8557334799197777396. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.020908198s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	I1209 23:15:14.125168       1 trace.go:116] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-12-09 23:14:44.124585579 +0000 UTC m=+0.077622778) (total time: 30.00044794s):
	Trace[2019727887]: [30.00044794s] [30.00044794s] END
	E1209 23:15:14.125203       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I1209 23:15:14.125410       1 trace.go:116] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-12-09 23:14:44.125035532 +0000 UTC m=+0.078072731) (total time: 30.00036132s):
	Trace[939984059]: [30.00036132s] [30.00036132s] END
	E1209 23:15:14.125424       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I1209 23:15:14.125712       1 trace.go:116] Trace[911902081]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-12-09 23:14:44.125354181 +0000 UTC m=+0.078391388) (total time: 30.000334481s):
	Trace[911902081]: [30.000334481s] [30.000334481s] END
	E1209 23:15:14.125726       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
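	The restarted coredns instance above logs "Still waiting on: kubernetes" and then 30-second i/o timeouts against 10.96.0.1:443 (the in-cluster kubernetes Service IP), which is consistent with kube-proxy and the API server still coming back when coredns first tried to list Services and Endpoints. A hedged check that the Service IP is actually backed by the apiserver endpoint is sketched below (not part of the recorded run).
	  # Hedged sketch: verify the endpoint behind the 10.96.0.1:443 Service IP
	  kubectl --context old-k8s-version-098617 -n default get endpoints kubernetes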
	
	
	==> describe nodes <==
	Name:               old-k8s-version-098617
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-098617
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bdb91ee97b7db1e27267ce5f380a98e3176548b5
	                    minikube.k8s.io/name=old-k8s-version-098617
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_09T23_11_52_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 09 Dec 2024 23:11:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-098617
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 09 Dec 2024 23:20:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 09 Dec 2024 23:15:30 +0000   Mon, 09 Dec 2024 23:11:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 09 Dec 2024 23:15:30 +0000   Mon, 09 Dec 2024 23:11:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 09 Dec 2024 23:15:30 +0000   Mon, 09 Dec 2024 23:11:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 09 Dec 2024 23:15:30 +0000   Mon, 09 Dec 2024 23:12:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-098617
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 db277c901b564588b69668d59c8f2e19
	  System UUID:                2b872bb4-9aec-411e-96f8-88189f87523b
	  Boot ID:                    982d10f7-311f-4ebf-96b3-48403acdb647
	  Kernel Version:             5.15.0-1072-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.22
	  Kubelet Version:            v1.20.0
	  Kube-Proxy Version:         v1.20.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m43s
	  kube-system                 coredns-74ff55c5b-tz959                           100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     8m24s
	  kube-system                 etcd-old-k8s-version-098617                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         8m31s
	  kube-system                 kindnet-8g8xl                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      8m24s
	  kube-system                 kube-apiserver-old-k8s-version-098617             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m31s
	  kube-system                 kube-controller-manager-old-k8s-version-098617    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m31s
	  kube-system                 kube-proxy-d8xtk                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m24s
	  kube-system                 kube-scheduler-old-k8s-version-098617             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m31s
	  kube-system                 metrics-server-9975d5f86-4rw7k                    100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         6m32s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m23s
	  kubernetes-dashboard        dashboard-metrics-scraper-8d5bb5db8-hmfdq         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m35s
	  kubernetes-dashboard        kubernetes-dashboard-cd95d586-9w2zp               0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m35s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             420Mi (5%)  220Mi (2%)
	  ephemeral-storage  100Mi (0%)  0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  8m51s (x4 over 8m51s)  kubelet     Node old-k8s-version-098617 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m51s (x5 over 8m51s)  kubelet     Node old-k8s-version-098617 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m51s (x4 over 8m51s)  kubelet     Node old-k8s-version-098617 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m31s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m31s                  kubelet     Node old-k8s-version-098617 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m31s                  kubelet     Node old-k8s-version-098617 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m31s                  kubelet     Node old-k8s-version-098617 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m31s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                8m24s                  kubelet     Node old-k8s-version-098617 status is now: NodeReady
	  Normal  Starting                 8m23s                  kube-proxy  Starting kube-proxy.
	  Normal  Starting                 6m4s                   kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m4s (x8 over 6m4s)    kubelet     Node old-k8s-version-098617 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m4s (x8 over 6m4s)    kubelet     Node old-k8s-version-098617 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m4s (x7 over 6m4s)    kubelet     Node old-k8s-version-098617 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m4s                   kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 5m47s                  kube-proxy  Starting kube-proxy.
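	The node description above lists metrics-server-9975d5f86-4rw7k among the non-terminated pods even though no matching container ever appears in the container status table, which lines up with the pull failures in the containerd log. A hedged way to see the pod-level reason is sketched below, using the pod name exactly as reported above (not executed as part of this run).
	  # Hedged sketch: show the image pull events for the metrics-server pod
	  kubectl --context old-k8s-version-098617 -n kube-system describe pod metrics-server-9975d5f86-4rw7k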
	
	
	==> dmesg <==
	[Dec 9 22:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.013902] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.481128] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026434] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.030455] systemd[1]: /lib/systemd/system/cloud-init.service:20: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.016714] systemd[1]: /lib/systemd/system/cloud-init-hotplugd.socket:11: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.643686] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.085449] kauditd_printk_skb: 36 callbacks suppressed
	[Dec 9 23:03] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [063e1c49d2c947a9f19b4fae6421961ea9a67ce263a258ff3303dcc0ab203f1c] <==
	raft2024/12/09 23:11:42 INFO: ea7e25599daad906 became candidate at term 2
	raft2024/12/09 23:11:42 INFO: ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2
	raft2024/12/09 23:11:42 INFO: ea7e25599daad906 became leader at term 2
	raft2024/12/09 23:11:42 INFO: raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2
	2024-12-09 23:11:42.437698 I | etcdserver: published {Name:old-k8s-version-098617 ClientURLs:[https://192.168.76.2:2379]} to cluster 6f20f2c4b2fb5f8a
	2024-12-09 23:11:42.437723 I | embed: ready to serve client requests
	2024-12-09 23:11:42.439366 I | embed: serving client requests on 192.168.76.2:2379
	2024-12-09 23:11:42.439444 I | embed: ready to serve client requests
	2024-12-09 23:11:42.447624 I | etcdserver: setting up the initial cluster version to 3.4
	2024-12-09 23:11:42.448191 N | etcdserver/membership: set the initial cluster version to 3.4
	2024-12-09 23:11:42.461921 I | embed: serving client requests on 127.0.0.1:2379
	2024-12-09 23:11:42.508996 I | etcdserver/api: enabled capabilities for version 3.4
	2024-12-09 23:11:51.098935 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-12-09 23:12:06.203215 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-12-09 23:12:08.878609 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-12-09 23:12:18.878016 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-12-09 23:12:28.877906 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-12-09 23:12:38.878044 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-12-09 23:12:48.877825 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-12-09 23:12:58.878002 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-12-09 23:13:08.878019 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-12-09 23:13:18.877886 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-12-09 23:13:28.877963 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-12-09 23:13:38.877765 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-12-09 23:13:48.878021 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> etcd [55743b620c44bd40d4ac5faf5671b922b7011a15211b36e43cf05dad9e0fdbfd] <==
	2024-12-09 23:16:23.538683 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-12-09 23:16:33.538659 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-12-09 23:16:43.538687 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-12-09 23:16:53.538682 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-12-09 23:17:03.538819 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-12-09 23:17:13.538734 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-12-09 23:17:23.538658 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-12-09 23:17:33.538892 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-12-09 23:17:43.538640 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-12-09 23:17:53.539064 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-12-09 23:18:03.538734 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-12-09 23:18:13.538911 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-12-09 23:18:23.538787 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-12-09 23:18:33.538687 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-12-09 23:18:43.538657 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-12-09 23:18:53.538877 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-12-09 23:19:03.539633 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-12-09 23:19:13.539471 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-12-09 23:19:23.538757 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-12-09 23:19:33.538837 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-12-09 23:19:43.538679 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-12-09 23:19:53.538832 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-12-09 23:20:03.538646 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-12-09 23:20:13.538644 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-12-09 23:20:23.538779 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> kernel <==
	 23:20:32 up  1:02,  0 users,  load average: 2.03, 2.27, 2.62
	Linux old-k8s-version-098617 5.15.0-1072-aws #78~20.04.1-Ubuntu SMP Wed Oct 9 15:29:54 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [394606f289ebff6412cd8423f28e2ba1a7918b8e4eac2870a5c1825e8e571eee] <==
	I1209 23:18:23.820304       1 main.go:301] handling current node
	I1209 23:18:33.822944       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1209 23:18:33.822980       1 main.go:301] handling current node
	I1209 23:18:43.814982       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1209 23:18:43.815046       1 main.go:301] handling current node
	I1209 23:18:53.820290       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1209 23:18:53.820326       1 main.go:301] handling current node
	I1209 23:19:03.823050       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1209 23:19:03.823081       1 main.go:301] handling current node
	I1209 23:19:13.822927       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1209 23:19:13.822964       1 main.go:301] handling current node
	I1209 23:19:23.818902       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1209 23:19:23.818943       1 main.go:301] handling current node
	I1209 23:19:33.822794       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1209 23:19:33.822880       1 main.go:301] handling current node
	I1209 23:19:43.815016       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1209 23:19:43.815053       1 main.go:301] handling current node
	I1209 23:19:53.818593       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1209 23:19:53.818626       1 main.go:301] handling current node
	I1209 23:20:03.823593       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1209 23:20:03.823626       1 main.go:301] handling current node
	I1209 23:20:13.823606       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1209 23:20:13.823651       1 main.go:301] handling current node
	I1209 23:20:23.818791       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1209 23:20:23.818943       1 main.go:301] handling current node
	
	
	==> kindnet [5125ce4b5b492867a27c0ac7a7b0e99ee7d2c899aba434ce230c8fe5eb273f38] <==
	I1209 23:12:12.103830       1 controller.go:365] Waiting for informer caches to sync
	I1209 23:12:12.103836       1 shared_informer.go:313] Waiting for caches to sync for kube-network-policies
	I1209 23:12:12.404010       1 shared_informer.go:320] Caches are synced for kube-network-policies
	I1209 23:12:12.404103       1 metrics.go:61] Registering metrics
	I1209 23:12:12.404202       1 controller.go:401] Syncing nftables rules
	I1209 23:12:22.112365       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1209 23:12:22.112411       1 main.go:301] handling current node
	I1209 23:12:32.103231       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1209 23:12:32.103272       1 main.go:301] handling current node
	I1209 23:12:42.104180       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1209 23:12:42.104302       1 main.go:301] handling current node
	I1209 23:12:52.108777       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1209 23:12:52.108811       1 main.go:301] handling current node
	I1209 23:13:02.111878       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1209 23:13:02.111912       1 main.go:301] handling current node
	I1209 23:13:12.103906       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1209 23:13:12.103939       1 main.go:301] handling current node
	I1209 23:13:22.108837       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1209 23:13:22.108873       1 main.go:301] handling current node
	I1209 23:13:32.112785       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1209 23:13:32.112821       1 main.go:301] handling current node
	I1209 23:13:42.112681       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1209 23:13:42.112782       1 main.go:301] handling current node
	I1209 23:13:52.103191       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1209 23:13:52.103223       1 main.go:301] handling current node
	
	
	==> kube-apiserver [6d1ffef5c3c113df0c1c5643d627d680020df30e159ab4a69ccf738c6f7c09e6] <==
	I1209 23:11:49.390946       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I1209 23:11:49.390978       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1209 23:11:49.404120       1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
	I1209 23:11:49.407872       1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
	I1209 23:11:49.407898       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I1209 23:11:49.899600       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1209 23:11:49.951414       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W1209 23:11:50.053931       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1209 23:11:50.055431       1 controller.go:606] quota admission added evaluator for: endpoints
	I1209 23:11:50.061210       1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1209 23:11:51.107531       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	I1209 23:11:51.584139       1 controller.go:606] quota admission added evaluator for: deployments.apps
	I1209 23:11:51.682149       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	I1209 23:12:00.203482       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
	I1209 23:12:07.184429       1 controller.go:606] quota admission added evaluator for: replicasets.apps
	I1209 23:12:07.207144       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
	I1209 23:12:17.509983       1 client.go:360] parsed scheme: "passthrough"
	I1209 23:12:17.510263       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1209 23:12:17.510282       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I1209 23:13:01.769153       1 client.go:360] parsed scheme: "passthrough"
	I1209 23:13:01.769363       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1209 23:13:01.769434       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I1209 23:13:33.961383       1 client.go:360] parsed scheme: "passthrough"
	I1209 23:13:33.961428       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1209 23:13:33.961461       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	
	==> kube-apiserver [9d1b42abf41370b358213f4f369435098a79f49c98a0b87c365dbfb7068093df] <==
	I1209 23:16:59.541165       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1209 23:16:59.541202       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W1209 23:17:43.931521       1 handler_proxy.go:102] no RequestInfo found in the context
	E1209 23:17:43.931608       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1209 23:17:43.931625       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1209 23:17:44.386193       1 client.go:360] parsed scheme: "passthrough"
	I1209 23:17:44.386384       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1209 23:17:44.386476       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I1209 23:18:15.389343       1 client.go:360] parsed scheme: "passthrough"
	I1209 23:18:15.389531       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1209 23:18:15.389551       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I1209 23:18:48.092210       1 client.go:360] parsed scheme: "passthrough"
	I1209 23:18:48.092268       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1209 23:18:48.092278       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I1209 23:19:28.553359       1 client.go:360] parsed scheme: "passthrough"
	I1209 23:19:28.553407       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1209 23:19:28.553418       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W1209 23:19:40.417661       1 handler_proxy.go:102] no RequestInfo found in the context
	E1209 23:19:40.417735       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1209 23:19:40.417752       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1209 23:20:01.812268       1 client.go:360] parsed scheme: "passthrough"
	I1209 23:20:01.812318       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1209 23:20:01.812338       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
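	The restarted kube-apiserver above keeps failing to load the OpenAPI spec for v1beta1.metrics.k8s.io with a 503, which is expected while the metrics-server backing that aggregated API never becomes ready; the controller-manager log below reports the same group failing discovery. A hedged check of the aggregated API registration is sketched here (not part of the recorded run).
	  # Hedged sketch: inspect the APIService that keeps returning 503
	  kubectl --context old-k8s-version-098617 get apiservice v1beta1.metrics.k8s.io -o wide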
	
	
	==> kube-controller-manager [10660454cbd9d4da094cb8f100e7feceef0b146ea7a208113cba972405412cf4] <==
	E1209 23:16:28.204822       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1209 23:16:36.143200       1 request.go:655] Throttling request took 1.047762695s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W1209 23:16:36.994628       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1209 23:16:58.708378       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1209 23:17:08.645311       1 request.go:655] Throttling request took 1.045506116s, request: GET:https://192.168.76.2:8443/apis/events.k8s.io/v1beta1?timeout=32s
	W1209 23:17:09.496536       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1209 23:17:29.211862       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1209 23:17:41.147090       1 request.go:655] Throttling request took 1.048146193s, request: GET:https://192.168.76.2:8443/apis/networking.k8s.io/v1beta1?timeout=32s
	W1209 23:17:41.998477       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1209 23:17:59.714009       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1209 23:18:13.648902       1 request.go:655] Throttling request took 1.048271957s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W1209 23:18:14.500370       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1209 23:18:30.215916       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1209 23:18:46.151073       1 request.go:655] Throttling request took 1.046410694s, request: GET:https://192.168.76.2:8443/apis/rbac.authorization.k8s.io/v1beta1?timeout=32s
	W1209 23:18:47.002646       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1209 23:19:00.721006       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1209 23:19:18.653095       1 request.go:655] Throttling request took 1.048506806s, request: GET:https://192.168.76.2:8443/apis/apps/v1?timeout=32s
	W1209 23:19:19.504961       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1209 23:19:31.222929       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1209 23:19:51.155286       1 request.go:655] Throttling request took 1.048237346s, request: GET:https://192.168.76.2:8443/apis/networking.k8s.io/v1?timeout=32s
	W1209 23:19:52.006944       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1209 23:20:01.725731       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1209 23:20:23.658185       1 request.go:655] Throttling request took 1.04493748s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W1209 23:20:24.509801       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1209 23:20:32.227516       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	
	
	==> kube-controller-manager [f68628204e6f96e082e5b48a6dc631b0a69e7de46bf5da75a9ca3e6911da3442] <==
	I1209 23:12:07.166862       1 shared_informer.go:247] Caches are synced for daemon sets 
	I1209 23:12:07.178498       1 shared_informer.go:247] Caches are synced for attach detach 
	I1209 23:12:07.217068       1 shared_informer.go:247] Caches are synced for endpoint_slice 
	I1209 23:12:07.222308       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-74ff55c5b to 2"
	I1209 23:12:07.234451       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-tz959"
	I1209 23:12:07.272105       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-86hlm"
	I1209 23:12:07.284225       1 shared_informer.go:247] Caches are synced for ReplicationController 
	I1209 23:12:07.284288       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-d8xtk"
	I1209 23:12:07.284301       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-8g8xl"
	I1209 23:12:07.303365       1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring 
	I1209 23:12:07.303514       1 shared_informer.go:247] Caches are synced for resource quota 
	I1209 23:12:07.324840       1 shared_informer.go:247] Caches are synced for stateful set 
	I1209 23:12:07.341260       1 shared_informer.go:247] Caches are synced for disruption 
	I1209 23:12:07.341286       1 disruption.go:339] Sending events to api server.
	I1209 23:12:07.510097       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	E1209 23:12:07.649252       1 daemon_controller.go:320] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"80b2b055-de84-4bea-9f10-0df319d00f9e", ResourceVersion:"412", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63869382711, loc:(*time.Location)(0x632eb80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x4000f9e020), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x4000f9e040)}, v1.ManagedFieldsEntry{Manager:"kube-co
ntroller-manager", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x4000f9e060), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x4000f9e080)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x4000f9e100), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElastic
BlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0x4001b34f80), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSour
ce)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4000f9e120), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSo
urce)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4000f9e140), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil),
Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.20.0", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil),
WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4000f9e180)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"F
ile", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x40004748a0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x4001513ad8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x40004f0070), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)
(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x4000114b20)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x4001513b28)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:1, NumberReady:0, ObservedGeneration:1, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:1, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest ve
rsion and try again
	I1209 23:12:07.710285       1 shared_informer.go:247] Caches are synced for garbage collector 
	I1209 23:12:07.731074       1 shared_informer.go:247] Caches are synced for garbage collector 
	I1209 23:12:07.731102       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1209 23:12:07.750455       1 shared_informer.go:240] Waiting for caches to sync for resource quota
	I1209 23:12:07.750496       1 shared_informer.go:247] Caches are synced for resource quota 
	I1209 23:12:08.194127       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
	I1209 23:12:08.212381       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-86hlm"
	I1209 23:12:12.105707       1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I1209 23:13:57.984954       1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-9975d5f86 to 1"
	
	
	==> kube-proxy [3a123be1d317e1e4f654bafa3493726c60356903a91f6e64d7b29782641f2d50] <==
	I1209 23:14:43.986481       1 node.go:172] Successfully retrieved node IP: 192.168.76.2
	I1209 23:14:43.986550       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
	W1209 23:14:44.159585       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I1209 23:14:44.159684       1 server_others.go:185] Using iptables Proxier.
	I1209 23:14:44.159913       1 server.go:650] Version: v1.20.0
	I1209 23:14:44.160404       1 config.go:315] Starting service config controller
	I1209 23:14:44.160421       1 shared_informer.go:240] Waiting for caches to sync for service config
	I1209 23:14:44.184838       1 config.go:224] Starting endpoint slice config controller
	I1209 23:14:44.184868       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I1209 23:14:44.264813       1 shared_informer.go:247] Caches are synced for service config 
	I1209 23:14:44.287243       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	
	==> kube-proxy [a33fca2389d21d809231c03f4d59c7c6edd2b935f0a6bee69e06642b5d121aad] <==
	I1209 23:12:08.695219       1 node.go:172] Successfully retrieved node IP: 192.168.76.2
	I1209 23:12:08.695473       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
	W1209 23:12:08.715529       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I1209 23:12:08.715607       1 server_others.go:185] Using iptables Proxier.
	I1209 23:12:08.715815       1 server.go:650] Version: v1.20.0
	I1209 23:12:08.716306       1 config.go:315] Starting service config controller
	I1209 23:12:08.716321       1 shared_informer.go:240] Waiting for caches to sync for service config
	I1209 23:12:08.718325       1 config.go:224] Starting endpoint slice config controller
	I1209 23:12:08.718343       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I1209 23:12:08.817732       1 shared_informer.go:247] Caches are synced for service config 
	I1209 23:12:08.818600       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	
	==> kube-scheduler [5693d8f440cbbfff0094faddfc750157e27b436336b58b60026d6e4b6afb7c19] <==
	I1209 23:11:44.274547       1 serving.go:331] Generated self-signed cert in-memory
	W1209 23:11:48.590660       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1209 23:11:48.590779       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1209 23:11:48.590930       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1209 23:11:48.590939       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1209 23:11:48.649529       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I1209 23:11:48.651872       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1209 23:11:48.651908       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1209 23:11:48.651926       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E1209 23:11:48.674683       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1209 23:11:48.675978       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1209 23:11:48.677468       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1209 23:11:48.677892       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1209 23:11:48.678178       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1209 23:11:48.678444       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1209 23:11:48.678852       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1209 23:11:48.683030       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1209 23:11:48.684866       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1209 23:11:48.685025       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1209 23:11:48.685125       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1209 23:11:48.685305       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1209 23:11:49.758882       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1209 23:11:52.852067       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kube-scheduler [e3e7eabe1dad882678b1f0f2e7a8cff160d9b5e4146196f53ed8533082a0103e] <==
	I1209 23:14:33.729740       1 serving.go:331] Generated self-signed cert in-memory
	W1209 23:14:39.387314       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1209 23:14:39.387541       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1209 23:14:39.387668       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1209 23:14:39.387743       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1209 23:14:40.013274       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I1209 23:14:40.023341       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1209 23:14:40.023366       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1209 23:14:40.023395       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I1209 23:14:40.231339       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kubelet <==
	Dec 09 23:18:52 old-k8s-version-098617 kubelet[661]: E1209 23:18:52.594642     661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
	Dec 09 23:18:56 old-k8s-version-098617 kubelet[661]: E1209 23:18:56.595179     661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 09 23:19:05 old-k8s-version-098617 kubelet[661]: I1209 23:19:05.594480     661 scope.go:95] [topologymanager] RemoveContainer - Container ID: 26b3428efb80789472c470c3dfe33d2069cefb3c11164ae469638814f3870a07
	Dec 09 23:19:05 old-k8s-version-098617 kubelet[661]: E1209 23:19:05.595377     661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
	Dec 09 23:19:09 old-k8s-version-098617 kubelet[661]: E1209 23:19:09.601501     661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 09 23:19:17 old-k8s-version-098617 kubelet[661]: I1209 23:19:17.598517     661 scope.go:95] [topologymanager] RemoveContainer - Container ID: 26b3428efb80789472c470c3dfe33d2069cefb3c11164ae469638814f3870a07
	Dec 09 23:19:17 old-k8s-version-098617 kubelet[661]: E1209 23:19:17.599504     661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
	Dec 09 23:19:23 old-k8s-version-098617 kubelet[661]: E1209 23:19:23.595388     661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 09 23:19:28 old-k8s-version-098617 kubelet[661]: I1209 23:19:28.594351     661 scope.go:95] [topologymanager] RemoveContainer - Container ID: 26b3428efb80789472c470c3dfe33d2069cefb3c11164ae469638814f3870a07
	Dec 09 23:19:28 old-k8s-version-098617 kubelet[661]: E1209 23:19:28.594771     661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
	Dec 09 23:19:34 old-k8s-version-098617 kubelet[661]: E1209 23:19:34.595081     661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 09 23:19:40 old-k8s-version-098617 kubelet[661]: I1209 23:19:40.594408     661 scope.go:95] [topologymanager] RemoveContainer - Container ID: 26b3428efb80789472c470c3dfe33d2069cefb3c11164ae469638814f3870a07
	Dec 09 23:19:40 old-k8s-version-098617 kubelet[661]: E1209 23:19:40.595347     661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
	Dec 09 23:19:48 old-k8s-version-098617 kubelet[661]: E1209 23:19:48.595049     661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 09 23:19:54 old-k8s-version-098617 kubelet[661]: I1209 23:19:54.594470     661 scope.go:95] [topologymanager] RemoveContainer - Container ID: 26b3428efb80789472c470c3dfe33d2069cefb3c11164ae469638814f3870a07
	Dec 09 23:19:54 old-k8s-version-098617 kubelet[661]: E1209 23:19:54.595330     661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
	Dec 09 23:20:01 old-k8s-version-098617 kubelet[661]: E1209 23:20:01.601974     661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 09 23:20:09 old-k8s-version-098617 kubelet[661]: I1209 23:20:09.599508     661 scope.go:95] [topologymanager] RemoveContainer - Container ID: 26b3428efb80789472c470c3dfe33d2069cefb3c11164ae469638814f3870a07
	Dec 09 23:20:09 old-k8s-version-098617 kubelet[661]: E1209 23:20:09.599956     661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
	Dec 09 23:20:12 old-k8s-version-098617 kubelet[661]: E1209 23:20:12.596822     661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 09 23:20:20 old-k8s-version-098617 kubelet[661]: I1209 23:20:20.594389     661 scope.go:95] [topologymanager] RemoveContainer - Container ID: 26b3428efb80789472c470c3dfe33d2069cefb3c11164ae469638814f3870a07
	Dec 09 23:20:20 old-k8s-version-098617 kubelet[661]: E1209 23:20:20.594826     661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
	Dec 09 23:20:27 old-k8s-version-098617 kubelet[661]: E1209 23:20:27.596077     661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 09 23:20:32 old-k8s-version-098617 kubelet[661]: I1209 23:20:32.594484     661 scope.go:95] [topologymanager] RemoveContainer - Container ID: 26b3428efb80789472c470c3dfe33d2069cefb3c11164ae469638814f3870a07
	Dec 09 23:20:32 old-k8s-version-098617 kubelet[661]: E1209 23:20:32.595347     661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
	
	
	==> kubernetes-dashboard [f5e0a0afceebb969d8da6457fcca1f6b9964499a31fb1842750ccb5a3884ddf2] <==
	2024/12/09 23:15:06 Using namespace: kubernetes-dashboard
	2024/12/09 23:15:06 Using in-cluster config to connect to apiserver
	2024/12/09 23:15:06 Using secret token for csrf signing
	2024/12/09 23:15:06 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/12/09 23:15:06 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/12/09 23:15:06 Successful initial request to the apiserver, version: v1.20.0
	2024/12/09 23:15:06 Generating JWE encryption key
	2024/12/09 23:15:06 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/12/09 23:15:06 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/12/09 23:15:06 Initializing JWE encryption key from synchronized object
	2024/12/09 23:15:06 Creating in-cluster Sidecar client
	2024/12/09 23:15:06 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/12/09 23:15:06 Serving insecurely on HTTP port: 9090
	2024/12/09 23:15:36 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/12/09 23:16:06 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/12/09 23:16:36 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/12/09 23:17:06 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/12/09 23:17:36 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/12/09 23:18:06 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/12/09 23:18:36 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/12/09 23:19:06 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/12/09 23:19:36 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/12/09 23:20:06 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/12/09 23:15:06 Starting overwatch
	
	
	==> storage-provisioner [5cdaa2e6255fc4d96282bee6cb565b6257b645c8f8ad628144066bc11b36d0aa] <==
	I1209 23:15:29.768665       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1209 23:15:29.782091       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1209 23:15:29.784787       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1209 23:15:47.274163       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1209 23:15:47.274660       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"7399b432-4f3e-4b63-af8d-3d8a1903dbca", APIVersion:"v1", ResourceVersion:"838", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-098617_0ad3c416-968b-49ab-9b3e-7b8d2929554c became leader
	I1209 23:15:47.279922       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-098617_0ad3c416-968b-49ab-9b3e-7b8d2929554c!
	I1209 23:15:47.381638       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-098617_0ad3c416-968b-49ab-9b3e-7b8d2929554c!
	
	
	==> storage-provisioner [9c62f2e12bccb234691d9df725b23072f6bf214069ff068aea47352ec6a1ef2d] <==
	I1209 23:14:43.564479       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1209 23:15:13.567348       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-098617 -n old-k8s-version-098617
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-098617 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-9975d5f86-4rw7k
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-098617 describe pod metrics-server-9975d5f86-4rw7k
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-098617 describe pod metrics-server-9975d5f86-4rw7k: exit status 1 (108.791905ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-9975d5f86-4rw7k" not found

** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-098617 describe pod metrics-server-9975d5f86-4rw7k: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (382.58s)
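The post-mortem sequence above (API server status check, listing non-running pods, then describing them) can be replayed by hand against the same profile while it still exists. A minimal sketch, reusing the exact profile name, field selector, and pod name reported in this run; note that the final describe returned NotFound here, so the pod may already have been replaced by the time it ran:

	# check the API server state of the profile
	out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-098617 -n old-k8s-version-098617
	# list pods across all namespaces that are not in the Running phase
	kubectl --context old-k8s-version-098617 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
	# describe the non-running pod named by the previous command
	kubectl --context old-k8s-version-098617 describe pod metrics-server-9975d5f86-4rw7k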


Test pass (300/330)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 8.48
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.21
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.31.2/json-events 7.12
13 TestDownloadOnly/v1.31.2/preload-exists 0
17 TestDownloadOnly/v1.31.2/LogsDuration 0.07
18 TestDownloadOnly/v1.31.2/DeleteAll 0.21
19 TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds 0.15
21 TestBinaryMirror 0.57
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
27 TestAddons/Setup 217.98
29 TestAddons/serial/Volcano 39.89
31 TestAddons/serial/GCPAuth/Namespaces 0.18
32 TestAddons/serial/GCPAuth/FakeCredentials 10.89
35 TestAddons/parallel/Registry 16.72
36 TestAddons/parallel/Ingress 20.65
37 TestAddons/parallel/InspektorGadget 11.87
38 TestAddons/parallel/MetricsServer 6.86
40 TestAddons/parallel/CSI 58.3
41 TestAddons/parallel/Headlamp 15.98
42 TestAddons/parallel/CloudSpanner 6.62
43 TestAddons/parallel/LocalPath 53.02
44 TestAddons/parallel/NvidiaDevicePlugin 5.85
45 TestAddons/parallel/Yakd 11.9
47 TestAddons/StoppedEnableDisable 12.18
48 TestCertOptions 40.44
49 TestCertExpiration 234.12
51 TestForceSystemdFlag 45.77
52 TestForceSystemdEnv 38.67
53 TestDockerEnvContainerd 43.14
58 TestErrorSpam/setup 30.7
59 TestErrorSpam/start 0.76
60 TestErrorSpam/status 1.15
61 TestErrorSpam/pause 1.71
62 TestErrorSpam/unpause 1.77
63 TestErrorSpam/stop 1.46
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 54.01
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 6.89
70 TestFunctional/serial/KubeContext 0.06
71 TestFunctional/serial/KubectlGetPods 0.1
74 TestFunctional/serial/CacheCmd/cache/add_remote 4.04
75 TestFunctional/serial/CacheCmd/cache/add_local 1.16
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
77 TestFunctional/serial/CacheCmd/cache/list 0.07
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.99
80 TestFunctional/serial/CacheCmd/cache/delete 0.12
81 TestFunctional/serial/MinikubeKubectlCmd 0.14
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
83 TestFunctional/serial/ExtraConfig 39.44
84 TestFunctional/serial/ComponentHealth 0.11
85 TestFunctional/serial/LogsCmd 1.59
86 TestFunctional/serial/LogsFileCmd 1.6
87 TestFunctional/serial/InvalidService 4.84
89 TestFunctional/parallel/ConfigCmd 0.47
90 TestFunctional/parallel/DashboardCmd 10.26
91 TestFunctional/parallel/DryRun 0.42
92 TestFunctional/parallel/InternationalLanguage 0.21
93 TestFunctional/parallel/StatusCmd 1.1
97 TestFunctional/parallel/ServiceCmdConnect 8.64
98 TestFunctional/parallel/AddonsCmd 0.2
99 TestFunctional/parallel/PersistentVolumeClaim 25.05
101 TestFunctional/parallel/SSHCmd 0.7
102 TestFunctional/parallel/CpCmd 2.17
104 TestFunctional/parallel/FileSync 0.34
105 TestFunctional/parallel/CertSync 2.24
109 TestFunctional/parallel/NodeLabels 0.11
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.61
113 TestFunctional/parallel/License 0.26
115 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.6
116 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
118 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.33
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.11
120 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
124 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
125 TestFunctional/parallel/ServiceCmd/DeployApp 7.23
126 TestFunctional/parallel/ProfileCmd/profile_not_create 0.5
127 TestFunctional/parallel/ProfileCmd/profile_list 0.41
128 TestFunctional/parallel/ProfileCmd/profile_json_output 0.53
129 TestFunctional/parallel/ServiceCmd/List 0.59
130 TestFunctional/parallel/MountCmd/any-port 7.4
131 TestFunctional/parallel/ServiceCmd/JSONOutput 0.75
132 TestFunctional/parallel/ServiceCmd/HTTPS 0.42
133 TestFunctional/parallel/ServiceCmd/Format 0.52
134 TestFunctional/parallel/ServiceCmd/URL 0.5
135 TestFunctional/parallel/MountCmd/specific-port 2.52
136 TestFunctional/parallel/MountCmd/VerifyCleanup 2.29
137 TestFunctional/parallel/Version/short 0.09
138 TestFunctional/parallel/Version/components 1.45
139 TestFunctional/parallel/ImageCommands/ImageListShort 0.26
140 TestFunctional/parallel/ImageCommands/ImageListTable 0.28
141 TestFunctional/parallel/ImageCommands/ImageListJson 0.31
142 TestFunctional/parallel/ImageCommands/ImageListYaml 0.3
143 TestFunctional/parallel/ImageCommands/ImageBuild 3.74
144 TestFunctional/parallel/ImageCommands/Setup 0.91
145 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.46
146 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.34
147 TestFunctional/parallel/UpdateContextCmd/no_changes 0.24
148 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.22
149 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.22
150 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.69
151 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.47
152 TestFunctional/parallel/ImageCommands/ImageRemove 0.6
153 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.68
154 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.43
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.01
157 TestFunctional/delete_minikube_cached_images 0.03
161 TestMultiControlPlane/serial/StartCluster 113.18
162 TestMultiControlPlane/serial/DeployApp 43.46
163 TestMultiControlPlane/serial/PingHostFromPods 1.54
164 TestMultiControlPlane/serial/AddWorkerNode 24.75
165 TestMultiControlPlane/serial/NodeLabels 0.12
166 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.03
167 TestMultiControlPlane/serial/CopyFile 18.74
168 TestMultiControlPlane/serial/StopSecondaryNode 12.73
169 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.75
170 TestMultiControlPlane/serial/RestartSecondaryNode 19.83
171 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.16
172 TestMultiControlPlane/serial/RestartClusterKeepsNodes 144.99
173 TestMultiControlPlane/serial/DeleteSecondaryNode 10.88
174 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.78
175 TestMultiControlPlane/serial/StopCluster 36.15
176 TestMultiControlPlane/serial/RestartCluster 77.71
177 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.76
178 TestMultiControlPlane/serial/AddSecondaryNode 42.15
179 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.03
183 TestJSONOutput/start/Command 50.76
184 TestJSONOutput/start/Audit 0
186 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
189 TestJSONOutput/pause/Command 0.71
190 TestJSONOutput/pause/Audit 0
192 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/unpause/Command 0.66
196 TestJSONOutput/unpause/Audit 0
198 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/stop/Command 5.72
202 TestJSONOutput/stop/Audit 0
204 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
206 TestErrorJSONOutput 0.22
208 TestKicCustomNetwork/create_custom_network 41.61
209 TestKicCustomNetwork/use_default_bridge_network 35.48
210 TestKicExistingNetwork 32.81
211 TestKicCustomSubnet 33.43
212 TestKicStaticIP 33.51
213 TestMainNoArgs 0.06
214 TestMinikubeProfile 67.98
217 TestMountStart/serial/StartWithMountFirst 6.61
218 TestMountStart/serial/VerifyMountFirst 0.26
219 TestMountStart/serial/StartWithMountSecond 6.29
220 TestMountStart/serial/VerifyMountSecond 0.28
221 TestMountStart/serial/DeleteFirst 1.59
222 TestMountStart/serial/VerifyMountPostDelete 0.26
223 TestMountStart/serial/Stop 1.2
224 TestMountStart/serial/RestartStopped 7.71
225 TestMountStart/serial/VerifyMountPostStop 0.27
228 TestMultiNode/serial/FreshStart2Nodes 75.13
229 TestMultiNode/serial/DeployApp2Nodes 20.3
230 TestMultiNode/serial/PingHostFrom2Pods 1
231 TestMultiNode/serial/AddNode 16.92
232 TestMultiNode/serial/MultiNodeLabels 0.09
233 TestMultiNode/serial/ProfileList 0.68
234 TestMultiNode/serial/CopyFile 10.05
235 TestMultiNode/serial/StopNode 2.24
236 TestMultiNode/serial/StartAfterStop 9.22
237 TestMultiNode/serial/RestartKeepsNodes 81.92
238 TestMultiNode/serial/DeleteNode 5.27
239 TestMultiNode/serial/StopMultiNode 23.9
240 TestMultiNode/serial/RestartMultiNode 53.57
241 TestMultiNode/serial/ValidateNameConflict 36.21
246 TestPreload 111.53
248 TestScheduledStopUnix 105.27
251 TestInsufficientStorage 10.57
252 TestRunningBinaryUpgrade 94.89
254 TestKubernetesUpgrade 360.75
255 TestMissingContainerUpgrade 173.72
257 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
258 TestNoKubernetes/serial/StartWithK8s 39.05
259 TestNoKubernetes/serial/StartWithStopK8s 17.86
260 TestNoKubernetes/serial/Start 6.21
261 TestNoKubernetes/serial/VerifyK8sNotRunning 0.27
262 TestNoKubernetes/serial/ProfileList 0.97
263 TestNoKubernetes/serial/Stop 1.21
264 TestNoKubernetes/serial/StartNoArgs 6.59
265 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.38
266 TestStoppedBinaryUpgrade/Setup 0.72
267 TestStoppedBinaryUpgrade/Upgrade 159.59
268 TestStoppedBinaryUpgrade/MinikubeLogs 1.47
277 TestPause/serial/Start 69.93
285 TestNetworkPlugins/group/false 3.79
289 TestPause/serial/SecondStartNoReconfiguration 7.83
290 TestPause/serial/Pause 0.91
291 TestPause/serial/VerifyStatus 0.4
292 TestPause/serial/Unpause 0.86
293 TestPause/serial/PauseAgain 1.1
294 TestPause/serial/DeletePaused 2.8
295 TestPause/serial/VerifyDeletedResources 0.43
297 TestStartStop/group/old-k8s-version/serial/FirstStart 161.15
298 TestStartStop/group/old-k8s-version/serial/DeployApp 8.58
299 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.56
300 TestStartStop/group/old-k8s-version/serial/Stop 12.55
302 TestStartStop/group/no-preload/serial/FirstStart 73.45
303 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.25
305 TestStartStop/group/no-preload/serial/DeployApp 8.39
306 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.3
307 TestStartStop/group/no-preload/serial/Stop 12.05
308 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.21
309 TestStartStop/group/no-preload/serial/SecondStart 267.53
310 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
311 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.1
312 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.23
313 TestStartStop/group/no-preload/serial/Pause 3.14
315 TestStartStop/group/embed-certs/serial/FirstStart 64.64
316 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
317 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.16
318 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.35
319 TestStartStop/group/old-k8s-version/serial/Pause 3.64
321 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 63.65
322 TestStartStop/group/embed-certs/serial/DeployApp 8.35
323 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.14
324 TestStartStop/group/embed-certs/serial/Stop 12.06
325 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.25
326 TestStartStop/group/embed-certs/serial/SecondStart 266.17
327 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.49
328 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.77
329 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.39
330 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.2
331 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 291.66
332 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
333 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.12
334 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.24
335 TestStartStop/group/embed-certs/serial/Pause 3.25
337 TestStartStop/group/newest-cni/serial/FirstStart 35.46
338 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
339 TestStartStop/group/newest-cni/serial/DeployApp 0
340 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.36
341 TestStartStop/group/newest-cni/serial/Stop 1.27
342 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
343 TestStartStop/group/newest-cni/serial/SecondStart 21.93
344 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.13
345 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.44
346 TestStartStop/group/default-k8s-diff-port/serial/Pause 4.65
347 TestNetworkPlugins/group/auto/Start 73.51
348 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
349 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
350 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.28
351 TestStartStop/group/newest-cni/serial/Pause 4.21
352 TestNetworkPlugins/group/kindnet/Start 59.27
353 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
354 TestNetworkPlugins/group/auto/KubeletFlags 0.28
355 TestNetworkPlugins/group/auto/NetCatPod 9.31
356 TestNetworkPlugins/group/kindnet/KubeletFlags 0.28
357 TestNetworkPlugins/group/kindnet/NetCatPod 8.28
358 TestNetworkPlugins/group/auto/DNS 0.18
359 TestNetworkPlugins/group/auto/Localhost 0.16
360 TestNetworkPlugins/group/auto/HairPin 0.15
361 TestNetworkPlugins/group/kindnet/DNS 0.32
362 TestNetworkPlugins/group/kindnet/Localhost 0.15
363 TestNetworkPlugins/group/kindnet/HairPin 0.15
364 TestNetworkPlugins/group/calico/Start 71.58
365 TestNetworkPlugins/group/custom-flannel/Start 59.33
366 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.3
367 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.35
368 TestNetworkPlugins/group/calico/ControllerPod 6.01
369 TestNetworkPlugins/group/custom-flannel/DNS 0.2
370 TestNetworkPlugins/group/custom-flannel/Localhost 0.15
371 TestNetworkPlugins/group/custom-flannel/HairPin 0.17
372 TestNetworkPlugins/group/calico/KubeletFlags 0.28
373 TestNetworkPlugins/group/calico/NetCatPod 10.27
374 TestNetworkPlugins/group/calico/DNS 0.22
375 TestNetworkPlugins/group/calico/Localhost 0.22
376 TestNetworkPlugins/group/calico/HairPin 0.23
377 TestNetworkPlugins/group/enable-default-cni/Start 76.41
378 TestNetworkPlugins/group/flannel/Start 56.53
379 TestNetworkPlugins/group/flannel/ControllerPod 6.01
380 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.3
381 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.31
382 TestNetworkPlugins/group/flannel/KubeletFlags 0.34
383 TestNetworkPlugins/group/flannel/NetCatPod 10.29
384 TestNetworkPlugins/group/enable-default-cni/DNS 0.18
385 TestNetworkPlugins/group/enable-default-cni/Localhost 0.15
386 TestNetworkPlugins/group/enable-default-cni/HairPin 0.16
387 TestNetworkPlugins/group/flannel/DNS 0.24
388 TestNetworkPlugins/group/flannel/Localhost 0.19
389 TestNetworkPlugins/group/flannel/HairPin 0.16
390 TestNetworkPlugins/group/bridge/Start 69.99
391 TestNetworkPlugins/group/bridge/KubeletFlags 0.29
392 TestNetworkPlugins/group/bridge/NetCatPod 9.26
393 TestNetworkPlugins/group/bridge/DNS 0.16
394 TestNetworkPlugins/group/bridge/Localhost 0.14
395 TestNetworkPlugins/group/bridge/HairPin 0.15
TestDownloadOnly/v1.20.0/json-events (8.48s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-720047 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-720047 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (8.475811911s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (8.48s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I1209 22:26:02.047926    7684 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
I1209 22:26:02.048009    7684 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19888-2244/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-720047
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-720047: exit status 85 (73.347939ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-720047 | jenkins | v1.34.0 | 09 Dec 24 22:25 UTC |          |
	|         | -p download-only-720047        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/09 22:25:53
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.23.2 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 22:25:53.618978    7690 out.go:345] Setting OutFile to fd 1 ...
	I1209 22:25:53.619194    7690 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 22:25:53.619221    7690 out.go:358] Setting ErrFile to fd 2...
	I1209 22:25:53.619239    7690 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 22:25:53.619522    7690 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19888-2244/.minikube/bin
	W1209 22:25:53.619696    7690 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19888-2244/.minikube/config/config.json: open /home/jenkins/minikube-integration/19888-2244/.minikube/config/config.json: no such file or directory
	I1209 22:25:53.620155    7690 out.go:352] Setting JSON to true
	I1209 22:25:53.620995    7690 start.go:129] hostinfo: {"hostname":"ip-172-31-24-2","uptime":501,"bootTime":1733782653,"procs":149,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1072-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1209 22:25:53.621091    7690 start.go:139] virtualization:  
	I1209 22:25:53.623615    7690 out.go:97] [download-only-720047] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1209 22:25:53.625461    7690 out.go:169] MINIKUBE_LOCATION=19888
	I1209 22:25:53.626487    7690 notify.go:220] Checking for updates...
	W1209 22:25:53.626595    7690 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19888-2244/.minikube/cache/preloaded-tarball: no such file or directory
	I1209 22:25:53.629181    7690 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 22:25:53.630622    7690 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19888-2244/kubeconfig
	I1209 22:25:53.632317    7690 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19888-2244/.minikube
	I1209 22:25:53.633953    7690 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W1209 22:25:53.636749    7690 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1209 22:25:53.637022    7690 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 22:25:53.668373    7690 docker.go:123] docker version: linux-27.4.0:Docker Engine - Community
	I1209 22:25:53.668473    7690 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 22:25:54.082274    7690 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-12-09 22:25:54.072869923 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1072-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0]] Warnings:<nil>}}
	I1209 22:25:54.082380    7690 docker.go:318] overlay module found
	I1209 22:25:54.083944    7690 out.go:97] Using the docker driver based on user configuration
	I1209 22:25:54.083971    7690 start.go:297] selected driver: docker
	I1209 22:25:54.083978    7690 start.go:901] validating driver "docker" against <nil>
	I1209 22:25:54.084082    7690 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 22:25:54.142030    7690 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-12-09 22:25:54.133936675 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1072-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0]] Warnings:<nil>}}
	I1209 22:25:54.142242    7690 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1209 22:25:54.142520    7690 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I1209 22:25:54.142688    7690 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1209 22:25:54.144562    7690 out.go:169] Using Docker driver with root privileges
	I1209 22:25:54.146181    7690 cni.go:84] Creating CNI manager for ""
	I1209 22:25:54.146244    7690 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1209 22:25:54.146257    7690 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1209 22:25:54.146329    7690 start.go:340] cluster config:
	{Name:download-only-720047 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-720047 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 22:25:54.147770    7690 out.go:97] Starting "download-only-720047" primary control-plane node in "download-only-720047" cluster
	I1209 22:25:54.147790    7690 cache.go:121] Beginning downloading kic base image for docker with containerd
	I1209 22:25:54.149107    7690 out.go:97] Pulling base image v0.0.45-1730888964-19917 ...
	I1209 22:25:54.149134    7690 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I1209 22:25:54.149273    7690 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 in local docker daemon
	I1209 22:25:54.165037    7690 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 to local cache
	I1209 22:25:54.165276    7690 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 in local cache directory
	I1209 22:25:54.165376    7690 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 to local cache
	I1209 22:25:54.212774    7690 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I1209 22:25:54.212799    7690 cache.go:56] Caching tarball of preloaded images
	I1209 22:25:54.212952    7690 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I1209 22:25:54.214545    7690 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I1209 22:25:54.214585    7690 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I1209 22:25:54.300514    7690 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:7e3d48ccb9f143791669d02e14ce1643 -> /home/jenkins/minikube-integration/19888-2244/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-720047 host does not exist
	  To start a cluster, run: "minikube start -p download-only-720047"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

TestDownloadOnly/v1.20.0/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.21s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-720047
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/json-events (7.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-504524 --force --alsologtostderr --kubernetes-version=v1.31.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-504524 --force --alsologtostderr --kubernetes-version=v1.31.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (7.124179811s)
--- PASS: TestDownloadOnly/v1.31.2/json-events (7.12s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/preload-exists
I1209 22:26:09.593003    7684 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime containerd
I1209 22:26:09.593039    7684 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19888-2244/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.2/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-504524
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-504524: exit status 85 (74.297592ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-720047 | jenkins | v1.34.0 | 09 Dec 24 22:25 UTC |                     |
	|         | -p download-only-720047        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 09 Dec 24 22:26 UTC | 09 Dec 24 22:26 UTC |
	| delete  | -p download-only-720047        | download-only-720047 | jenkins | v1.34.0 | 09 Dec 24 22:26 UTC | 09 Dec 24 22:26 UTC |
	| start   | -o=json --download-only        | download-only-504524 | jenkins | v1.34.0 | 09 Dec 24 22:26 UTC |                     |
	|         | -p download-only-504524        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/09 22:26:02
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.23.2 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 22:26:02.516609    7888 out.go:345] Setting OutFile to fd 1 ...
	I1209 22:26:02.516771    7888 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 22:26:02.516782    7888 out.go:358] Setting ErrFile to fd 2...
	I1209 22:26:02.516787    7888 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 22:26:02.517067    7888 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19888-2244/.minikube/bin
	I1209 22:26:02.517490    7888 out.go:352] Setting JSON to true
	I1209 22:26:02.518250    7888 start.go:129] hostinfo: {"hostname":"ip-172-31-24-2","uptime":509,"bootTime":1733782653,"procs":145,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1072-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1209 22:26:02.518319    7888 start.go:139] virtualization:  
	I1209 22:26:02.520176    7888 out.go:97] [download-only-504524] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1209 22:26:02.520459    7888 notify.go:220] Checking for updates...
	I1209 22:26:02.521717    7888 out.go:169] MINIKUBE_LOCATION=19888
	I1209 22:26:02.523235    7888 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 22:26:02.524645    7888 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19888-2244/kubeconfig
	I1209 22:26:02.526248    7888 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19888-2244/.minikube
	I1209 22:26:02.527475    7888 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W1209 22:26:02.529869    7888 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1209 22:26:02.530230    7888 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 22:26:02.549869    7888 docker.go:123] docker version: linux-27.4.0:Docker Engine - Community
	I1209 22:26:02.549973    7888 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 22:26:02.610330    7888 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-12-09 22:26:02.601550644 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1072-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-n
f-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0]] Warnings:<nil>}}
	I1209 22:26:02.610442    7888 docker.go:318] overlay module found
	I1209 22:26:02.611844    7888 out.go:97] Using the docker driver based on user configuration
	I1209 22:26:02.611872    7888 start.go:297] selected driver: docker
	I1209 22:26:02.611878    7888 start.go:901] validating driver "docker" against <nil>
	I1209 22:26:02.611985    7888 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 22:26:02.661809    7888 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-12-09 22:26:02.653197472 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1072-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-n
f-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0]] Warnings:<nil>}}
	I1209 22:26:02.662015    7888 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1209 22:26:02.662288    7888 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I1209 22:26:02.662446    7888 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1209 22:26:02.663940    7888 out.go:169] Using Docker driver with root privileges
	I1209 22:26:02.665226    7888 cni.go:84] Creating CNI manager for ""
	I1209 22:26:02.665283    7888 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1209 22:26:02.665299    7888 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1209 22:26:02.665364    7888 start.go:340] cluster config:
	{Name:download-only-504524 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:download-only-504524 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 22:26:02.667438    7888 out.go:97] Starting "download-only-504524" primary control-plane node in "download-only-504524" cluster
	I1209 22:26:02.667462    7888 cache.go:121] Beginning downloading kic base image for docker with containerd
	I1209 22:26:02.668654    7888 out.go:97] Pulling base image v0.0.45-1730888964-19917 ...
	I1209 22:26:02.668680    7888 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime containerd
	I1209 22:26:02.668803    7888 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 in local docker daemon
	I1209 22:26:02.684507    7888 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 to local cache
	I1209 22:26:02.684613    7888 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 in local cache directory
	I1209 22:26:02.684634    7888 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 in local cache directory, skipping pull
	I1209 22:26:02.684640    7888 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 exists in cache, skipping pull
	I1209 22:26:02.684647    7888 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 as a tarball
	I1209 22:26:02.732136    7888 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.2/preloaded-images-k8s-v18-v1.31.2-containerd-overlay2-arm64.tar.lz4
	I1209 22:26:02.732170    7888 cache.go:56] Caching tarball of preloaded images
	I1209 22:26:02.732324    7888 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime containerd
	I1209 22:26:02.733981    7888 out.go:97] Downloading Kubernetes v1.31.2 preload ...
	I1209 22:26:02.734006    7888 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.2-containerd-overlay2-arm64.tar.lz4 ...
	I1209 22:26:02.821993    7888 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.2/preloaded-images-k8s-v18-v1.31.2-containerd-overlay2-arm64.tar.lz4?checksum=md5:5a1c96cd03f848c5b0e8fb66f315acd5 -> /home/jenkins/minikube-integration/19888-2244/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-containerd-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-504524 host does not exist
	  To start a cluster, run: "minikube start -p download-only-504524"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.2/LogsDuration (0.07s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.2/DeleteAll (0.21s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-504524
--- PASS: TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
x
+
TestBinaryMirror (0.57s)

                                                
                                                
=== RUN   TestBinaryMirror
I1209 22:26:10.880557    7684 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-176113 --alsologtostderr --binary-mirror http://127.0.0.1:41421 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-176113" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-176113
--- PASS: TestBinaryMirror (0.57s)
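TestBinaryMirror points minikube at a local HTTP server via --binary-mirror instead of dl.k8s.io. A rough sketch of the same idea outside the test harness, assuming a directory that mirrors the dl.k8s.io release paths; the server command, port, and profile name below are illustrative and not what the test itself runs:

    python3 -m http.server 8080 --directory ./k8s-release-mirror &
    out/minikube-linux-arm64 start --download-only -p binary-mirror-demo \
      --binary-mirror http://127.0.0.1:8080 --driver=docker --container-runtime=containerd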

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-013873
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-013873: exit status 85 (62.044847ms)

                                                
                                                
-- stdout --
	* Profile "addons-013873" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-013873"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-013873
addons_test.go:950: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-013873: exit status 85 (81.282154ms)

                                                
                                                
-- stdout --
	* Profile "addons-013873" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-013873"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

                                                
                                    
x
+
TestAddons/Setup (217.98s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-arm64 start -p addons-013873 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-arm64 start -p addons-013873 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m37.98224912s)
--- PASS: TestAddons/Setup (217.98s)

                                                
                                    
x
+
TestAddons/serial/Volcano (39.89s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:815: volcano-admission stabilized in 50.086555ms
addons_test.go:823: volcano-controller stabilized in 50.169869ms
addons_test.go:807: volcano-scheduler stabilized in 51.470788ms
addons_test.go:829: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-6c9778cbdf-w64bw" [cae75d11-7462-4ce8-941a-e8f6ec107386] Running
addons_test.go:829: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.004587872s
addons_test.go:833: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-5874dfdd79-zqj64" [98a4138e-1c72-4283-b438-db2c9fb0464f] Running
addons_test.go:833: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.004539067s
addons_test.go:837: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-789ffc5785-9vwmz" [bdbd6cd4-b01c-49f4-b754-15e9a590726f] Running
addons_test.go:837: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003514364s
addons_test.go:842: (dbg) Run:  kubectl --context addons-013873 delete -n volcano-system job volcano-admission-init
addons_test.go:848: (dbg) Run:  kubectl --context addons-013873 create -f testdata/vcjob.yaml
addons_test.go:856: (dbg) Run:  kubectl --context addons-013873 get vcjob -n my-volcano
addons_test.go:874: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [70b5b01d-3e47-4db2-afd7-dc5c9a322d62] Pending
helpers_test.go:344: "test-job-nginx-0" [70b5b01d-3e47-4db2-afd7-dc5c9a322d62] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [70b5b01d-3e47-4db2-afd7-dc5c9a322d62] Running
addons_test.go:874: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 12.004067129s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-013873 addons disable volcano --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-013873 addons disable volcano --alsologtostderr -v=1: (11.182274049s)
--- PASS: TestAddons/serial/Volcano (39.89s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.18s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-013873 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-013873 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.18s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/FakeCredentials (10.89s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-013873 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-013873 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [b5a226c7-0997-4ce9-aedd-5bc5ee802d0c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [b5a226c7-0997-4ce9-aedd-5bc5ee802d0c] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.003748839s
addons_test.go:633: (dbg) Run:  kubectl --context addons-013873 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-013873 describe sa gcp-auth-test
addons_test.go:659: (dbg) Run:  kubectl --context addons-013873 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:683: (dbg) Run:  kubectl --context addons-013873 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.89s)
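The FakeCredentials test checks that the gcp-auth addon injects a credentials file and project environment variables into pods created after it is enabled. The same spot-checks the test runs can be repeated by hand against the busybox pod above:

    kubectl --context addons-013873 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
    kubectl --context addons-013873 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
    kubectl --context addons-013873 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"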

                                                
                                    
x
+
TestAddons/parallel/Registry (16.72s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 3.477209ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-5cc95cd69-k94xj" [bc21dda5-9b1b-4dd9-adf4-38f8d3c50dd9] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.002959321s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-6dm6h" [5e06e394-1197-476f-bc63-0c9d0a984d9a] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004437354s
addons_test.go:331: (dbg) Run:  kubectl --context addons-013873 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-013873 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-013873 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.762954949s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-arm64 -p addons-013873 ip
2024/12/09 22:31:05 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-013873 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.72s)
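The registry check probes the addon from inside the cluster and then hits the node IP directly. A short sketch of both probes; the curl call is an illustrative stand-in for the HTTP GET the test performs against port 5000:

    kubectl --context addons-013873 run --rm registry-test --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -it -- \
      sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
    curl -sI "http://$(out/minikube-linux-arm64 -p addons-013873 ip):5000"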

                                                
                                    
x
+
TestAddons/parallel/Ingress (20.65s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-013873 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-013873 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-013873 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [ec694cd2-06b2-44de-b696-52bbd6ae153b] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [ec694cd2-06b2-44de-b696-52bbd6ae153b] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.003944602s
I1209 22:32:28.622168    7684 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-arm64 -p addons-013873 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context addons-013873 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-arm64 -p addons-013873 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-013873 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-013873 addons disable ingress-dns --alsologtostderr -v=1: (1.230555636s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-013873 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-013873 addons disable ingress --alsologtostderr -v=1: (7.760992551s)
--- PASS: TestAddons/parallel/Ingress (20.65s)
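The ingress test curls the controller through the node with an overridden Host header and then resolves a test name against ingress-dns on the node IP. Both checks can be reproduced directly with the commands the test runs:

    out/minikube-linux-arm64 -p addons-013873 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
    nslookup hello-john.test "$(out/minikube-linux-arm64 -p addons-013873 ip)"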

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (11.87s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-lm7qt" [55aa94d4-6c76-4f30-95d3-a6618ff9ca52] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.006409733s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-013873 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-013873 addons disable inspektor-gadget --alsologtostderr -v=1: (5.856800146s)
--- PASS: TestAddons/parallel/InspektorGadget (11.87s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (6.86s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 2.856009ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-wmcfl" [56da8dfe-e495-4177-a05d-1da51644e073] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.004773417s
addons_test.go:402: (dbg) Run:  kubectl --context addons-013873 top pods -n kube-system
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-013873 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.86s)
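Once metrics-server reports healthy, the test simply asks for pod metrics; the same check works from any shell using the profile's kubeconfig:

    out/minikube-linux-arm64 -p addons-013873 addons enable metrics-server
    kubectl --context addons-013873 top pods -n kube-system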

                                                
                                    
x
+
TestAddons/parallel/CSI (58.3s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1209 22:31:29.852803    7684 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1209 22:31:29.859829    7684 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1209 22:31:29.859946    7684 kapi.go:107] duration metric: took 10.051376ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 10.08372ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-013873 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-013873 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-013873 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-013873 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-013873 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-013873 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-013873 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-013873 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-013873 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-013873 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-013873 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-013873 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-013873 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-013873 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-013873 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [e4b61c10-7890-4e4b-8d04-7af1427f8ed7] Pending
helpers_test.go:344: "task-pv-pod" [e4b61c10-7890-4e4b-8d04-7af1427f8ed7] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [e4b61c10-7890-4e4b-8d04-7af1427f8ed7] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.003945789s
addons_test.go:511: (dbg) Run:  kubectl --context addons-013873 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-013873 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-013873 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-013873 delete pod task-pv-pod
addons_test.go:521: (dbg) Done: kubectl --context addons-013873 delete pod task-pv-pod: (1.109877092s)
addons_test.go:527: (dbg) Run:  kubectl --context addons-013873 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-013873 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-013873 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-013873 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-013873 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-013873 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-013873 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-013873 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-013873 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-013873 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-013873 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-013873 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-013873 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-013873 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-013873 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-013873 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-013873 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-013873 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-013873 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-013873 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-013873 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-013873 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-013873 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-013873 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [343a5525-ebf0-4f7b-a4a6-0b0a1e10e4fc] Pending
helpers_test.go:344: "task-pv-pod-restore" [343a5525-ebf0-4f7b-a4a6-0b0a1e10e4fc] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [343a5525-ebf0-4f7b-a4a6-0b0a1e10e4fc] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003924482s
addons_test.go:553: (dbg) Run:  kubectl --context addons-013873 delete pod task-pv-pod-restore
addons_test.go:557: (dbg) Run:  kubectl --context addons-013873 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-013873 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-013873 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-013873 addons disable volumesnapshots --alsologtostderr -v=1: (1.06330623s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-013873 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-013873 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.941529634s)
--- PASS: TestAddons/parallel/CSI (58.30s)
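The CSI test drives csi-hostpath-driver through a PVC, pod, snapshot, and restore cycle using manifests from testdata/csi-hostpath-driver (not reproduced here), polling two fields until they report Bound and true respectively. The polling commands, as run above:

    kubectl --context addons-013873 get pvc hpvc -o jsonpath={.status.phase} -n default
    kubectl --context addons-013873 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default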

                                                
                                    
x
+
TestAddons/parallel/Headlamp (15.98s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-013873 --alsologtostderr -v=1
addons_test.go:747: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-013873 --alsologtostderr -v=1: (1.175164244s)
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-cd8ffd6fc-56mc2" [5749ff0d-e53d-4098-b4e9-7ce0cdc3d183] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-cd8ffd6fc-56mc2" [5749ff0d-e53d-4098-b4e9-7ce0cdc3d183] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 9.004346763s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-013873 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-013873 addons disable headlamp --alsologtostderr -v=1: (5.802793315s)
--- PASS: TestAddons/parallel/Headlamp (15.98s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (6.62s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-dc5db94f4-slrhm" [4e55da93-6817-4d84-be62-cbdfc205552a] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.004158048s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-013873 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.62s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (53.02s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-013873 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-013873 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-013873 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-013873 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-013873 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-013873 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-013873 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-013873 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [10fc9789-9f80-4733-be7c-c2b6012d466c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [10fc9789-9f80-4733-be7c-c2b6012d466c] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [10fc9789-9f80-4733-be7c-c2b6012d466c] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.005232721s
addons_test.go:906: (dbg) Run:  kubectl --context addons-013873 get pvc test-pvc -o=json
addons_test.go:915: (dbg) Run:  out/minikube-linux-arm64 -p addons-013873 ssh "cat /opt/local-path-provisioner/pvc-9b5062b9-3f23-4661-81e5-de97164479e5_default_test-pvc/file1"
addons_test.go:927: (dbg) Run:  kubectl --context addons-013873 delete pod test-local-path
addons_test.go:931: (dbg) Run:  kubectl --context addons-013873 delete pvc test-pvc
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-013873 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-013873 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.569965971s)
--- PASS: TestAddons/parallel/LocalPath (53.02s)
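The local-path test reads the provisioned file straight off the node; the pvc-<uid>_default_test-pvc directory name in the ssh command above is specific to this run. A hedged way to locate the volume directory on another run before reading its contents:

    out/minikube-linux-arm64 -p addons-013873 ssh "ls /opt/local-path-provisioner/"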

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (5.85s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-lqkqp" [c85dff03-12a5-4f7b-9765-03d9b94ad505] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.006749745s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-013873 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.85s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (11.9s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-25mb8" [8460133c-1d48-4968-9a4b-d85b5d2f4b61] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003793303s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-013873 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-013873 addons disable yakd --alsologtostderr -v=1: (5.89435587s)
--- PASS: TestAddons/parallel/Yakd (11.90s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (12.18s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-013873
addons_test.go:170: (dbg) Done: out/minikube-linux-arm64 stop -p addons-013873: (11.91081062s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-013873
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-013873
addons_test.go:183: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-013873
--- PASS: TestAddons/StoppedEnableDisable (12.18s)

                                                
                                    
x
+
TestCertOptions (40.44s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-171060 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-171060 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (37.814902464s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-171060 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-171060 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-171060 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-171060" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-171060
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-171060: (1.959078459s)
--- PASS: TestCertOptions (40.44s)
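TestCertOptions asserts that the extra --apiserver-ips/--apiserver-names values end up in the API server certificate. A quick manual check of the SANs, reusing the openssl command from the test (the grep filter is only for readability):

    out/minikube-linux-arm64 -p cert-options-171060 ssh \
      "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 "Subject Alternative Name"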

                                                
                                    
x
+
TestCertExpiration (234.12s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-521962 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-521962 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (43.705953409s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-521962 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-521962 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (8.134682872s)
helpers_test.go:175: Cleaning up "cert-expiration-521962" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-521962
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-521962: (2.27438976s)
--- PASS: TestCertExpiration (234.12s)
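
The roughly four-minute wall time is dominated by letting the 3-minute certificates actually expire between the two starts. A minimal sketch of the same flow, with the profile name from the log:

  # issue cluster certificates that expire after 3 minutes
  minikube start -p cert-expiration-521962 --memory=2048 --cert-expiration=3m --driver=docker --container-runtime=containerd
  # ...wait out the 3m window...
  # restarting with a new expiration should regenerate the expired certificates
  minikube start -p cert-expiration-521962 --memory=2048 --cert-expiration=8760h --driver=docker --container-runtime=containerd
  minikube delete -p cert-expiration-521962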

TestForceSystemdFlag (45.77s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-287142 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-287142 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (43.078789486s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-287142 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-287142" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-287142
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-287142: (2.315983908s)
--- PASS: TestForceSystemdFlag (45.77s)
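
A minimal sketch of checking the effect of --force-systemd by hand; the grep at the end is an illustrative stand-in, not the test's own assertion:

  minikube start -p force-systemd-flag-287142 --memory=2048 --force-systemd --driver=docker --container-runtime=containerd
  # with --force-systemd the containerd config is expected to select the systemd cgroup driver
  minikube -p force-systemd-flag-287142 ssh "cat /etc/containerd/config.toml" | grep -i systemd
  minikube delete -p force-systemd-flag-287142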

TestForceSystemdEnv (38.67s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-786239 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
E1209 23:09:49.556745    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/addons-013873/client.crt: no such file or directory" logger="UnhandledError"
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-786239 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (36.226268261s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-786239 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-786239" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-786239
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-786239: (2.074552463s)
--- PASS: TestForceSystemdEnv (38.67s)

TestDockerEnvContainerd (43.14s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-673995 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-673995 --driver=docker  --container-runtime=containerd: (27.587571521s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-673995"
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-omVpYDv1zoEB/agent.28311" SSH_AGENT_PID="28312" DOCKER_HOST=ssh://docker@127.0.0.1:32773 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-omVpYDv1zoEB/agent.28311" SSH_AGENT_PID="28312" DOCKER_HOST=ssh://docker@127.0.0.1:32773 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-omVpYDv1zoEB/agent.28311" SSH_AGENT_PID="28312" DOCKER_HOST=ssh://docker@127.0.0.1:32773 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.188188607s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-omVpYDv1zoEB/agent.28311" SSH_AGENT_PID="28312" DOCKER_HOST=ssh://docker@127.0.0.1:32773 docker image ls"
helpers_test.go:175: Cleaning up "dockerenv-673995" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-673995
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-673995: (1.95145666s)
--- PASS: TestDockerEnvContainerd (43.14s)
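
The docker-env flow above can be reproduced interactively. A minimal sketch, assuming a POSIX shell and that testdata/docker-env is run from the test workspace; the test exports the printed variables explicitly, whereas eval-ing the command output is the usual interactive equivalent:

  minikube start -p dockerenv-673995 --driver=docker --container-runtime=containerd
  # point the local docker CLI at the docker engine inside the minikube node, over SSH
  eval "$(minikube docker-env --ssh-host --ssh-add -p dockerenv-673995)"
  docker version
  DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env
  docker image ls
  minikube delete -p dockerenv-673995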

TestErrorSpam/setup (30.7s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-578547 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-578547 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-578547 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-578547 --driver=docker  --container-runtime=containerd: (30.694710408s)
--- PASS: TestErrorSpam/setup (30.70s)

TestErrorSpam/start (0.76s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-578547 --log_dir /tmp/nospam-578547 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-578547 --log_dir /tmp/nospam-578547 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-578547 --log_dir /tmp/nospam-578547 start --dry-run
--- PASS: TestErrorSpam/start (0.76s)

TestErrorSpam/status (1.15s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-578547 --log_dir /tmp/nospam-578547 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-578547 --log_dir /tmp/nospam-578547 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-578547 --log_dir /tmp/nospam-578547 status
--- PASS: TestErrorSpam/status (1.15s)

TestErrorSpam/pause (1.71s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-578547 --log_dir /tmp/nospam-578547 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-578547 --log_dir /tmp/nospam-578547 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-578547 --log_dir /tmp/nospam-578547 pause
--- PASS: TestErrorSpam/pause (1.71s)

TestErrorSpam/unpause (1.77s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-578547 --log_dir /tmp/nospam-578547 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-578547 --log_dir /tmp/nospam-578547 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-578547 --log_dir /tmp/nospam-578547 unpause
--- PASS: TestErrorSpam/unpause (1.77s)

TestErrorSpam/stop (1.46s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-578547 --log_dir /tmp/nospam-578547 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-578547 --log_dir /tmp/nospam-578547 stop: (1.250563139s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-578547 --log_dir /tmp/nospam-578547 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-578547 --log_dir /tmp/nospam-578547 stop
--- PASS: TestErrorSpam/stop (1.46s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19888-2244/.minikube/files/etc/test/nested/copy/7684/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (54.01s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-arm64 start -p functional-463603 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
E1209 22:34:49.557374    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/addons-013873/client.crt: no such file or directory" logger="UnhandledError"
E1209 22:34:49.564004    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/addons-013873/client.crt: no such file or directory" logger="UnhandledError"
E1209 22:34:49.575295    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/addons-013873/client.crt: no such file or directory" logger="UnhandledError"
E1209 22:34:49.596607    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/addons-013873/client.crt: no such file or directory" logger="UnhandledError"
E1209 22:34:49.637935    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/addons-013873/client.crt: no such file or directory" logger="UnhandledError"
E1209 22:34:49.719283    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/addons-013873/client.crt: no such file or directory" logger="UnhandledError"
E1209 22:34:49.880688    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/addons-013873/client.crt: no such file or directory" logger="UnhandledError"
E1209 22:34:50.202290    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/addons-013873/client.crt: no such file or directory" logger="UnhandledError"
E1209 22:34:50.843875    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/addons-013873/client.crt: no such file or directory" logger="UnhandledError"
E1209 22:34:52.125164    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/addons-013873/client.crt: no such file or directory" logger="UnhandledError"
E1209 22:34:54.687040    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/addons-013873/client.crt: no such file or directory" logger="UnhandledError"
E1209 22:34:59.809296    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/addons-013873/client.crt: no such file or directory" logger="UnhandledError"
E1209 22:35:10.050871    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/addons-013873/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2234: (dbg) Done: out/minikube-linux-arm64 start -p functional-463603 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (54.014048752s)
--- PASS: TestFunctional/serial/StartWithProxy (54.01s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (6.89s)

=== RUN   TestFunctional/serial/SoftStart
I1209 22:35:16.828264    7684 config.go:182] Loaded profile config "functional-463603": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
functional_test.go:659: (dbg) Run:  out/minikube-linux-arm64 start -p functional-463603 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-arm64 start -p functional-463603 --alsologtostderr -v=8: (6.884049911s)
functional_test.go:663: soft start took 6.887240873s for "functional-463603" cluster.
I1209 22:35:23.713076    7684 config.go:182] Loaded profile config "functional-463603": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
--- PASS: TestFunctional/serial/SoftStart (6.89s)

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.1s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-463603 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

TestFunctional/serial/CacheCmd/cache/add_remote (4.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-463603 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-463603 cache add registry.k8s.io/pause:3.1: (1.517284315s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-463603 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-463603 cache add registry.k8s.io/pause:3.3: (1.354407197s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-463603 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-463603 cache add registry.k8s.io/pause:latest: (1.168266839s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.04s)

TestFunctional/serial/CacheCmd/cache/add_local (1.16s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-463603 /tmp/TestFunctionalserialCacheCmdcacheadd_local4062116729/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-arm64 -p functional-463603 cache add minikube-local-cache-test:functional-463603
functional_test.go:1094: (dbg) Run:  out/minikube-linux-arm64 -p functional-463603 cache delete minikube-local-cache-test:functional-463603
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-463603
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.16s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

TestFunctional/serial/CacheCmd/cache/list (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-arm64 -p functional-463603 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.99s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-arm64 -p functional-463603 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-arm64 -p functional-463603 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-463603 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (298.907049ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-arm64 -p functional-463603 cache reload
E1209 22:35:30.533081    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/addons-013873/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1158: (dbg) Done: out/minikube-linux-arm64 -p functional-463603 cache reload: (1.100337129s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-arm64 -p functional-463603 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.99s)
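
A minimal sketch of the cache round-trip above, with the same image and profile; the trailing comments restate what the exit codes in the log show:

  # remove the image from inside the node, so the next inspecti fails
  minikube -p functional-463603 ssh sudo crictl rmi registry.k8s.io/pause:latest
  minikube -p functional-463603 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exit 1: image no longer present
  # cache reload pushes the locally cached images back into the node
  minikube -p functional-463603 cache reload
  minikube -p functional-463603 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again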

TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-arm64 -p functional-463603 kubectl -- --context functional-463603 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-463603 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

TestFunctional/serial/ExtraConfig (39.44s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-arm64 start -p functional-463603 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-arm64 start -p functional-463603 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (39.440737713s)
functional_test.go:761: restart took 39.440837467s for "functional-463603" cluster.
I1209 22:36:11.345522    7684 config.go:182] Loaded profile config "functional-463603": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
--- PASS: TestFunctional/serial/ExtraConfig (39.44s)
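
A minimal sketch of the restart above: --extra-config passes a component flag through to the running profile, here an additional apiserver admission plugin, and --wait=all blocks until the components report healthy again:

  minikube start -p functional-463603 \
    --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all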

TestFunctional/serial/ComponentHealth (0.11s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-463603 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.11s)

TestFunctional/serial/LogsCmd (1.59s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-arm64 -p functional-463603 logs
E1209 22:36:11.495342    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/addons-013873/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1236: (dbg) Done: out/minikube-linux-arm64 -p functional-463603 logs: (1.590898175s)
--- PASS: TestFunctional/serial/LogsCmd (1.59s)

TestFunctional/serial/LogsFileCmd (1.6s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-arm64 -p functional-463603 logs --file /tmp/TestFunctionalserialLogsFileCmd4255039340/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-arm64 -p functional-463603 logs --file /tmp/TestFunctionalserialLogsFileCmd4255039340/001/logs.txt: (1.603015644s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.60s)

TestFunctional/serial/InvalidService (4.84s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-463603 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-463603
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-463603: exit status 115 (369.19972ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30742 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-463603 delete -f testdata/invalidsvc.yaml
functional_test.go:2327: (dbg) Done: kubectl --context functional-463603 delete -f testdata/invalidsvc.yaml: (1.221074237s)
--- PASS: TestFunctional/serial/InvalidService (4.84s)
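
A minimal sketch of the negative case above: a Service whose selector matches no running pod makes minikube service exit with SVC_UNREACHABLE (exit status 115 in the log) instead of printing a reachable URL:

  kubectl --context functional-463603 apply -f testdata/invalidsvc.yaml
  minikube service invalid-svc -p functional-463603   # exits 115: no running pod for service invalid-svc
  kubectl --context functional-463603 delete -f testdata/invalidsvc.yaml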

TestFunctional/parallel/ConfigCmd (0.47s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-463603 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-463603 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-463603 config get cpus: exit status 14 (81.549958ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-463603 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-463603 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-463603 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-463603 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-463603 config get cpus: exit status 14 (82.641538ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.47s)

TestFunctional/parallel/DashboardCmd (10.26s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-463603 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-463603 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 42608: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (10.26s)

TestFunctional/parallel/DryRun (0.42s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-arm64 start -p functional-463603 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-463603 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (181.933345ms)

-- stdout --
	* [functional-463603] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19888
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19888-2244/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19888-2244/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1209 22:36:50.328842   42304 out.go:345] Setting OutFile to fd 1 ...
	I1209 22:36:50.329113   42304 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 22:36:50.329159   42304 out.go:358] Setting ErrFile to fd 2...
	I1209 22:36:50.329184   42304 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 22:36:50.329541   42304 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19888-2244/.minikube/bin
	I1209 22:36:50.330084   42304 out.go:352] Setting JSON to false
	I1209 22:36:50.331385   42304 start.go:129] hostinfo: {"hostname":"ip-172-31-24-2","uptime":1157,"bootTime":1733782653,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1072-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1209 22:36:50.331519   42304 start.go:139] virtualization:  
	I1209 22:36:50.334725   42304 out.go:177] * [functional-463603] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1209 22:36:50.337358   42304 out.go:177]   - MINIKUBE_LOCATION=19888
	I1209 22:36:50.337413   42304 notify.go:220] Checking for updates...
	I1209 22:36:50.339947   42304 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 22:36:50.342552   42304 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19888-2244/kubeconfig
	I1209 22:36:50.345025   42304 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19888-2244/.minikube
	I1209 22:36:50.347642   42304 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1209 22:36:50.350292   42304 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 22:36:50.353386   42304 config.go:182] Loaded profile config "functional-463603": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
	I1209 22:36:50.354090   42304 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 22:36:50.380436   42304 docker.go:123] docker version: linux-27.4.0:Docker Engine - Community
	I1209 22:36:50.380556   42304 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 22:36:50.442420   42304 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-12-09 22:36:50.432671207 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1072-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-n
f-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0]] Warnings:<nil>}}
	I1209 22:36:50.442527   42304 docker.go:318] overlay module found
	I1209 22:36:50.445697   42304 out.go:177] * Using the docker driver based on existing profile
	I1209 22:36:50.448446   42304 start.go:297] selected driver: docker
	I1209 22:36:50.448468   42304 start.go:901] validating driver "docker" against &{Name:functional-463603 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-463603 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:do
cker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 22:36:50.448629   42304 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 22:36:50.452006   42304 out.go:201] 
	W1209 22:36:50.454837   42304 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1209 22:36:50.457443   42304 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-arm64 start -p functional-463603 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.42s)
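
The non-zero exit above is minikube's preflight memory validation firing during --dry-run. A minimal sketch of both sides of the check, taken from the two commands in the log:

  # 250MB is below the 1800MB usable minimum, so this exits 23 (RSRC_INSUFFICIENT_REQ_MEMORY)
  minikube start -p functional-463603 --dry-run --memory 250MB --alsologtostderr --driver=docker --container-runtime=containerd
  # without the undersized memory request the same dry run passes validation
  minikube start -p functional-463603 --dry-run --alsologtostderr -v=1 --driver=docker --container-runtime=containerd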

TestFunctional/parallel/InternationalLanguage (0.21s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-arm64 start -p functional-463603 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-463603 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (206.049367ms)

-- stdout --
	* [functional-463603] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19888
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19888-2244/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19888-2244/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1209 22:36:50.142253   42257 out.go:345] Setting OutFile to fd 1 ...
	I1209 22:36:50.142468   42257 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 22:36:50.142481   42257 out.go:358] Setting ErrFile to fd 2...
	I1209 22:36:50.142486   42257 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 22:36:50.142901   42257 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19888-2244/.minikube/bin
	I1209 22:36:50.143327   42257 out.go:352] Setting JSON to false
	I1209 22:36:50.144374   42257 start.go:129] hostinfo: {"hostname":"ip-172-31-24-2","uptime":1157,"bootTime":1733782653,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1072-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1209 22:36:50.144456   42257 start.go:139] virtualization:  
	I1209 22:36:50.147728   42257 out.go:177] * [functional-463603] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	I1209 22:36:50.151257   42257 out.go:177]   - MINIKUBE_LOCATION=19888
	I1209 22:36:50.151315   42257 notify.go:220] Checking for updates...
	I1209 22:36:50.157037   42257 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 22:36:50.159779   42257 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19888-2244/kubeconfig
	I1209 22:36:50.162593   42257 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19888-2244/.minikube
	I1209 22:36:50.165344   42257 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1209 22:36:50.168033   42257 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 22:36:50.171293   42257 config.go:182] Loaded profile config "functional-463603": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
	I1209 22:36:50.171852   42257 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 22:36:50.204231   42257 docker.go:123] docker version: linux-27.4.0:Docker Engine - Community
	I1209 22:36:50.204345   42257 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 22:36:50.259764   42257 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-12-09 22:36:50.250817441 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1072-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-n
f-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0]] Warnings:<nil>}}
	I1209 22:36:50.259872   42257 docker.go:318] overlay module found
	I1209 22:36:50.263161   42257 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I1209 22:36:50.265698   42257 start.go:297] selected driver: docker
	I1209 22:36:50.265720   42257 start.go:901] validating driver "docker" against &{Name:functional-463603 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-463603 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:do
cker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 22:36:50.265843   42257 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 22:36:50.269233   42257 out.go:201] 
	W1209 22:36:50.272021   42257 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1209 22:36:50.274677   42257 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.21s)

TestFunctional/parallel/StatusCmd (1.1s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-arm64 -p functional-463603 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-arm64 -p functional-463603 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-arm64 -p functional-463603 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.10s)

TestFunctional/parallel/ServiceCmdConnect (8.64s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-463603 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-463603 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-zk4zw" [85d46907-f29e-48fc-bfe1-c28bf3655077] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-65d86f57f4-zk4zw" [85d46907-f29e-48fc-bfe1-c28bf3655077] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.003812475s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-arm64 -p functional-463603 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:31311
functional_test.go:1675: http://192.168.49.2:31311: success! body:

Hostname: hello-node-connect-65d86f57f4-zk4zw

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31311
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.64s)
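
A minimal sketch of the connectivity check above; curl stands in for the HTTP GET the test performs against the NodePort URL it gets back:

  kubectl --context functional-463603 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
  kubectl --context functional-463603 expose deployment hello-node-connect --type=NodePort --port=8080
  # prints a URL such as http://192.168.49.2:31311 once the pod is Running
  minikube -p functional-463603 service hello-node-connect --url
  curl "$(minikube -p functional-463603 service hello-node-connect --url)"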

TestFunctional/parallel/AddonsCmd (0.2s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-arm64 -p functional-463603 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-arm64 -p functional-463603 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.20s)

TestFunctional/parallel/PersistentVolumeClaim (25.05s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [30ab5ec8-07e6-45e9-94bb-3c0255d0538e] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004572036s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-463603 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-463603 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-463603 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-463603 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [75d9c68e-a687-4732-bda2-6e41788f2e6c] Pending
helpers_test.go:344: "sp-pod" [75d9c68e-a687-4732-bda2-6e41788f2e6c] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [75d9c68e-a687-4732-bda2-6e41788f2e6c] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 9.004691728s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-463603 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-463603 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-463603 delete -f testdata/storage-provisioner/pod.yaml: (1.04424624s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-463603 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [22e62716-c291-4045-8c66-c92ed1c67b04] Pending
helpers_test.go:344: "sp-pod" [22e62716-c291-4045-8c66-c92ed1c67b04] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [22e62716-c291-4045-8c66-c92ed1c67b04] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.004245864s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-463603 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.05s)
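A sketch of the same PVC round-trip, using the manifests referenced above (testdata/storage-provisioner/pvc.yaml and pod.yaml live in the minikube test tree; `kubectl wait` is an assumption standing in for the harness's pod-readiness polling):

  kubectl --context functional-463603 apply -f testdata/storage-provisioner/pvc.yaml
  kubectl --context functional-463603 apply -f testdata/storage-provisioner/pod.yaml
  kubectl --context functional-463603 wait --for=condition=Ready pod/sp-pod --timeout=3m
  kubectl --context functional-463603 exec sp-pod -- touch /tmp/mount/foo
  kubectl --context functional-463603 delete -f testdata/storage-provisioner/pod.yaml
  kubectl --context functional-463603 apply -f testdata/storage-provisioner/pod.yaml
  kubectl --context functional-463603 wait --for=condition=Ready pod/sp-pod --timeout=3m
  kubectl --context functional-463603 exec sp-pod -- ls /tmp/mount   # the file written before the restart is still there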

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-arm64 -p functional-463603 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-arm64 -p functional-463603 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.70s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (2.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-463603 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-463603 ssh -n functional-463603 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-463603 cp functional-463603:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd745546602/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-463603 ssh -n functional-463603 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-463603 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-463603 ssh -n functional-463603 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.17s)
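The three cp cases above map to these commands (the /tmp/TestFunctionalparallelCpCmd... destination in the log is the harness's per-run temp dir; any local path works in its place):

  out/minikube-linux-arm64 -p functional-463603 cp testdata/cp-test.txt /home/docker/cp-test.txt              # host -> node
  out/minikube-linux-arm64 -p functional-463603 cp functional-463603:/home/docker/cp-test.txt ./cp-test.txt   # node -> host
  out/minikube-linux-arm64 -p functional-463603 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt       # missing parent dirs on the node
  out/minikube-linux-arm64 -p functional-463603 ssh -n functional-463603 "sudo cat /tmp/does/not/exist/cp-test.txt"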

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/7684/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-arm64 -p functional-463603 ssh "sudo cat /etc/test/nested/copy/7684/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.34s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (2.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/7684.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-463603 ssh "sudo cat /etc/ssl/certs/7684.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/7684.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-463603 ssh "sudo cat /usr/share/ca-certificates/7684.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-463603 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/76842.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-463603 ssh "sudo cat /etc/ssl/certs/76842.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/76842.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-463603 ssh "sudo cat /usr/share/ca-certificates/76842.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-463603 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.24s)
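In command form, the check is that the two test certificates (named after the test process PID, 7684 in this run) are present both under /etc/ssl/certs and /usr/share/ca-certificates inside the node, along with the hash-named entries the test pairs with them:

  out/minikube-linux-arm64 -p functional-463603 ssh "sudo cat /etc/ssl/certs/7684.pem"
  out/minikube-linux-arm64 -p functional-463603 ssh "sudo cat /usr/share/ca-certificates/7684.pem"
  out/minikube-linux-arm64 -p functional-463603 ssh "sudo cat /etc/ssl/certs/51391683.0"   # hash-named entry checked alongside 7684.pem
  out/minikube-linux-arm64 -p functional-463603 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"   # hash-named entry checked alongside 76842.pem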

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-463603 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-463603 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-463603 ssh "sudo systemctl is-active docker": exit status 1 (342.64885ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-463603 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-463603 ssh "sudo systemctl is-active crio": exit status 1 (268.563859ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.61s)
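Since this profile runs containerd, the other runtimes are expected to be inactive; exit status 3 is systemd's normal "unit is not active" code for `systemctl is-active`, which the ssh wrapper surfaces as a non-zero exit. A quick manual check against the same profile:

  out/minikube-linux-arm64 -p functional-463603 ssh "sudo systemctl is-active docker"       # prints "inactive", exits 3
  out/minikube-linux-arm64 -p functional-463603 ssh "sudo systemctl is-active crio"         # prints "inactive", exits 3
  out/minikube-linux-arm64 -p functional-463603 ssh "sudo systemctl is-active containerd"   # should print "active" (not exercised above)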

                                                
                                    
x
+
TestFunctional/parallel/License (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.26s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-463603 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-463603 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-463603 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 39880: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-463603 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.60s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-463603 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-463603 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [3008fa7d-ded7-4e30-b121-0ee4ebbeb41d] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [3008fa7d-ded7-4e30-b121-0ee4ebbeb41d] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.004536525s
I1209 22:36:29.896669    7684 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.33s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-463603 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.111.150.175 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
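Taken together, the tunnel subtests run this flow: start a tunnel, create the LoadBalancer service from testdata/testsvc.yaml, read the ingress IP it is assigned, and hit it directly (10.111.150.175 is simply the address this run received). A sketch, with curl standing in for the test's access check:

  out/minikube-linux-arm64 -p functional-463603 tunnel --alsologtostderr &   # keep running in the background
  kubectl --context functional-463603 apply -f testdata/testsvc.yaml
  IP=$(kubectl --context functional-463603 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
  curl -s "http://$IP"   # stand-in for the direct-access check
  kill %1                # stop the tunnel when done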

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-463603 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (7.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-463603 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-463603 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-qb98h" [90d5f2be-0a9b-4dee-a694-18e8751adfdb] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64b4f8f9ff-qb98h" [90d5f2be-0a9b-4dee-a694-18e8751adfdb] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.005141121s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.23s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.50s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1315: Took "344.8505ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1329: Took "63.970378ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.41s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1366: Took "461.623679ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1379: Took "67.688561ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.53s)
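The three ProfileCmd subtests exercise the listing variants; the light forms skip validating cluster status, which is presumably why they return in tens of milliseconds above while the full listings take several hundred:

  out/minikube-linux-arm64 profile list                   # full table (~345ms in this run)
  out/minikube-linux-arm64 profile list -l                # light mode (~64ms in this run)
  out/minikube-linux-arm64 profile list -o json
  out/minikube-linux-arm64 profile list -o json --light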

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-arm64 -p functional-463603 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.59s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (7.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-463603 /tmp/TestFunctionalparallelMountCmdany-port2783002476/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1733783806673324484" to /tmp/TestFunctionalparallelMountCmdany-port2783002476/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1733783806673324484" to /tmp/TestFunctionalparallelMountCmdany-port2783002476/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1733783806673324484" to /tmp/TestFunctionalparallelMountCmdany-port2783002476/001/test-1733783806673324484
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-463603 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-463603 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (483.946529ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1209 22:36:47.158172    7684 retry.go:31] will retry after 465.908886ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-463603 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-463603 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec  9 22:36 created-by-test
-rw-r--r-- 1 docker docker 24 Dec  9 22:36 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec  9 22:36 test-1733783806673324484
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-463603 ssh cat /mount-9p/test-1733783806673324484
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-463603 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [1d265e25-2404-4429-9afc-af9e5d7c9b9e] Pending
helpers_test.go:344: "busybox-mount" [1d265e25-2404-4429-9afc-af9e5d7c9b9e] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [1d265e25-2404-4429-9afc-af9e5d7c9b9e] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [1d265e25-2404-4429-9afc-af9e5d7c9b9e] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.003623864s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-463603 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-463603 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-463603 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-463603 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-463603 /tmp/TestFunctionalparallelMountCmdany-port2783002476/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.40s)
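The 9p mount flow above as a standalone sketch (/some/host/dir is a placeholder for the harness temp dir; the initial findmnt failure in the log is just the test polling before the mount daemon is up):

  out/minikube-linux-arm64 mount -p functional-463603 /some/host/dir:/mount-9p --alsologtostderr -v=1 &
  out/minikube-linux-arm64 -p functional-463603 ssh "findmnt -T /mount-9p | grep 9p"        # retried until the mount appears
  out/minikube-linux-arm64 -p functional-463603 ssh -- ls -la /mount-9p
  kubectl --context functional-463603 replace --force -f testdata/busybox-mount-test.yaml   # pod reads and writes the mount
  out/minikube-linux-arm64 -p functional-463603 ssh "sudo umount -f /mount-9p"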

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-arm64 -p functional-463603 service list -o json
functional_test.go:1494: Took "753.703564ms" to run "out/minikube-linux-arm64 -p functional-463603 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.75s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-arm64 -p functional-463603 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:31608
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.42s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-arm64 -p functional-463603 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.52s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-arm64 -p functional-463603 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:31608
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.50s)
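The ServiceCmd subtests resolve the same NodePort (31608 in this run) through different output modes:

  out/minikube-linux-arm64 -p functional-463603 service list
  out/minikube-linux-arm64 -p functional-463603 service list -o json
  out/minikube-linux-arm64 -p functional-463603 service --namespace=default --https --url hello-node   # https://192.168.49.2:31608
  out/minikube-linux-arm64 -p functional-463603 service hello-node --url --format={{.IP}}              # prints just the node IP
  out/minikube-linux-arm64 -p functional-463603 service hello-node --url                               # http://192.168.49.2:31608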

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (2.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-463603 /tmp/TestFunctionalparallelMountCmdspecific-port3212333503/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-463603 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-463603 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (513.865879ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1209 22:36:54.589822    7684 retry.go:31] will retry after 698.599906ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-463603 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-463603 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-463603 /tmp/TestFunctionalparallelMountCmdspecific-port3212333503/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-463603 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-463603 ssh "sudo umount -f /mount-9p": exit status 1 (329.715672ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-463603 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-463603 /tmp/TestFunctionalparallelMountCmdspecific-port3212333503/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.52s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (2.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-463603 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3956323713/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-463603 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3956323713/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-463603 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3956323713/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-463603 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-463603 ssh "findmnt -T" /mount1: exit status 1 (930.888447ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1209 22:36:57.525864    7684 retry.go:31] will retry after 280.89862ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-463603 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-463603 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-463603 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-463603 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-463603 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3956323713/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-463603 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3956323713/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-463603 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3956323713/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.29s)
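VerifyCleanup mounts the same host directory at three targets and then relies on a single kill to tear them all down; the "unable to find parent, assuming dead" messages afterwards confirm the mount daemons are already gone. A sketch (/some/host/dir is again a placeholder):

  for t in /mount1 /mount2 /mount3; do
    out/minikube-linux-arm64 mount -p functional-463603 /some/host/dir:$t --alsologtostderr -v=1 &
  done
  out/minikube-linux-arm64 -p functional-463603 ssh "findmnt -T" /mount1
  out/minikube-linux-arm64 mount -p functional-463603 --kill=true   # terminates every mount process for the profile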

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-arm64 -p functional-463603 version --short
--- PASS: TestFunctional/parallel/Version/short (0.09s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (1.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-arm64 -p functional-463603 version -o=json --components
functional_test.go:2270: (dbg) Done: out/minikube-linux-arm64 -p functional-463603 version -o=json --components: (1.4499058s)
--- PASS: TestFunctional/parallel/Version/components (1.45s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-463603 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-463603 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.2
registry.k8s.io/kube-proxy:v1.31.2
registry.k8s.io/kube-controller-manager:v1.31.2
registry.k8s.io/kube-apiserver:v1.31.2
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-463603
docker.io/kindest/kindnetd:v20241108-5c6d2daf
docker.io/kindest/kindnetd:v20241007-36f62932
docker.io/kicbase/echo-server:functional-463603
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-463603 image ls --format short --alsologtostderr:
I1209 22:37:07.635843   45584 out.go:345] Setting OutFile to fd 1 ...
I1209 22:37:07.636099   45584 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1209 22:37:07.636128   45584 out.go:358] Setting ErrFile to fd 2...
I1209 22:37:07.636147   45584 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1209 22:37:07.636484   45584 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19888-2244/.minikube/bin
I1209 22:37:07.637187   45584 config.go:182] Loaded profile config "functional-463603": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
I1209 22:37:07.637367   45584 config.go:182] Loaded profile config "functional-463603": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
I1209 22:37:07.637887   45584 cli_runner.go:164] Run: docker container inspect functional-463603 --format={{.State.Status}}
I1209 22:37:07.657880   45584 ssh_runner.go:195] Run: systemctl --version
I1209 22:37:07.658000   45584 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-463603
I1209 22:37:07.676465   45584 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/19888-2244/.minikube/machines/functional-463603/id_rsa Username:docker}
I1209 22:37:07.773504   45584 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-463603 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-463603 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| docker.io/library/minikube-local-cache-test | functional-463603  | sha256:758d9e | 991B   |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:1611cd | 1.94MB |
| registry.k8s.io/kube-proxy                  | v1.31.2            | sha256:021d24 | 26.8MB |
| registry.k8s.io/kube-scheduler              | v1.31.2            | sha256:d6b061 | 18.4MB |
| registry.k8s.io/pause                       | 3.3                | sha256:3d1873 | 249kB  |
| registry.k8s.io/pause                       | latest             | sha256:8cb209 | 71.3kB |
| docker.io/kindest/kindnetd                  | v20241007-36f62932 | sha256:0bcd66 | 35.3MB |
| docker.io/library/nginx                     | alpine             | sha256:dba92e | 24.3MB |
| docker.io/library/nginx                     | latest             | sha256:bdf62f | 68.5MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:ba04bb | 8.03MB |
| registry.k8s.io/etcd                        | 3.5.15-0           | sha256:27e383 | 66.5MB |
| registry.k8s.io/coredns/coredns             | v1.11.3            | sha256:2f6c96 | 16.9MB |
| registry.k8s.io/kube-controller-manager     | v1.31.2            | sha256:9404ae | 23.9MB |
| docker.io/kindest/kindnetd                  | v20241108-5c6d2daf | sha256:2be0bc | 35.3MB |
| docker.io/kicbase/echo-server               | functional-463603  | sha256:ce2d2c | 2.17MB |
| registry.k8s.io/echoserver-arm              | 1.8                | sha256:72565b | 45.3MB |
| registry.k8s.io/kube-apiserver              | v1.31.2            | sha256:f9c264 | 25.6MB |
| registry.k8s.io/pause                       | 3.1                | sha256:8057e0 | 262kB  |
| registry.k8s.io/pause                       | 3.10               | sha256:afb617 | 268kB  |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-463603 image ls --format table --alsologtostderr:
I1209 22:37:08.486612   45797 out.go:345] Setting OutFile to fd 1 ...
I1209 22:37:08.486938   45797 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1209 22:37:08.486969   45797 out.go:358] Setting ErrFile to fd 2...
I1209 22:37:08.486993   45797 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1209 22:37:08.487609   45797 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19888-2244/.minikube/bin
I1209 22:37:08.490751   45797 config.go:182] Loaded profile config "functional-463603": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
I1209 22:37:08.491041   45797 config.go:182] Loaded profile config "functional-463603": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
I1209 22:37:08.491786   45797 cli_runner.go:164] Run: docker container inspect functional-463603 --format={{.State.Status}}
I1209 22:37:08.514055   45797 ssh_runner.go:195] Run: systemctl --version
I1209 22:37:08.514116   45797 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-463603
I1209 22:37:08.535965   45797 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/19888-2244/.minikube/machines/functional-463603/id_rsa Username:docker}
I1209 22:37:08.631306   45797 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-463603 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-463603 image ls --format json --alsologtostderr:
[{"id":"sha256:2be0bcf609c6573ee83e676c747f31bda661ab2d4e039c51839e38fd258d2903","repoDigests":["docker.io/kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108"],"repoTags":["docker.io/kindest/kindnetd:v20241108-5c6d2daf"],"size":"35310383"},{"id":"sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"18306114"},{"id":"sha256:dba92e6b6488643fe4f2e872e6e4f6c30948171890d0f2cb96f28c435352397f","repoDigests":["docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4"],"repoTags":["docker.io/library/nginx:alpine"],"size":"24250568"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","r
epoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"},{"id":"sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"74084559"},{"id":"sha256:758d9e486c5845cc1296b32bbcd5770c80e682b81ce22ef21fbb7ade09e50d9b","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-463603"],"size":"991"},{"id":"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"},{"id":"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.
3"],"size":"16948420"},{"id":"sha256:d6b061e73ae454743cbfe0e3479aa23e4ed65c61d38b4408a1e7f3d3859dda8a","repoDigests":["registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.2"],"size":"18429679"},{"id":"sha256:9404aea098d9e80cb648d86c07d56130a1fe875ed7c2526251c2ae68a9bf07ba","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.2"],"size":"23872272"},{"id":"sha256:021d2420133054f8835987db659750ff639ab6863776460264dd8025c06644ba","repoDigests":["registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.2"],"size":"26768683"},{"id":"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":["registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f50566
6ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"267933"},{"id":"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-463603"],"size":"2173567"},{"id":"sha256:0bcd66b03df5f1498fba5b90226939f5993cfba4c8379438bd8e89f3b4a70baa","repoDigests":["docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387"],"repoTags":["docker.io/kindest/kindnetd:v20241007-36f62932"],"size":"35320503"},{"id":"sha256:bdf62fd3a32f1209270ede068b6e08450dfe125c79b1a8ba8f5685090023bf7f","repoDigests":["docker.io/library/nginx@sha256:fb197595ebe76b9c0c14ab68159fd3c08bd067ec62300583543f0ebda353b5be"],"repoTags":["docker.io/library/nginx:latest"],"size":"68524740"},{"id":"sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.i
o/echoserver-arm:1.8"],"size":"45324675"},{"id":"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":["registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"66535646"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:f9c26480f1e722a7d05d7f1bb339180b19f941b23bcc928208e362df04a61270","repoDigests":["registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.2"],"size":"25612805"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-463603 image ls --format json --alsologtostderr:
I1209 22:37:08.177821   45740 out.go:345] Setting OutFile to fd 1 ...
I1209 22:37:08.177976   45740 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1209 22:37:08.177982   45740 out.go:358] Setting ErrFile to fd 2...
I1209 22:37:08.177988   45740 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1209 22:37:08.178294   45740 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19888-2244/.minikube/bin
I1209 22:37:08.178990   45740 config.go:182] Loaded profile config "functional-463603": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
I1209 22:37:08.179113   45740 config.go:182] Loaded profile config "functional-463603": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
I1209 22:37:08.179574   45740 cli_runner.go:164] Run: docker container inspect functional-463603 --format={{.State.Status}}
I1209 22:37:08.220757   45740 ssh_runner.go:195] Run: systemctl --version
I1209 22:37:08.220813   45740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-463603
I1209 22:37:08.258989   45740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/19888-2244/.minikube/machines/functional-463603/id_rsa Username:docker}
I1209 22:37:08.352052   45740 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.31s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-463603 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-463603 image ls --format yaml --alsologtostderr:
- id: sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "18306114"
- id: sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "16948420"
- id: sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests:
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "66535646"
- id: sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests:
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "267933"
- id: sha256:d6b061e73ae454743cbfe0e3479aa23e4ed65c61d38b4408a1e7f3d3859dda8a
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.2
size: "18429679"
- id: sha256:2be0bcf609c6573ee83e676c747f31bda661ab2d4e039c51839e38fd258d2903
repoDigests:
- docker.io/kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108
repoTags:
- docker.io/kindest/kindnetd:v20241108-5c6d2daf
size: "35310383"
- id: sha256:758d9e486c5845cc1296b32bbcd5770c80e682b81ce22ef21fbb7ade09e50d9b
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-463603
size: "991"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:dba92e6b6488643fe4f2e872e6e4f6c30948171890d0f2cb96f28c435352397f
repoDigests:
- docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4
repoTags:
- docker.io/library/nginx:alpine
size: "24250568"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "45324675"
- id: sha256:0bcd66b03df5f1498fba5b90226939f5993cfba4c8379438bd8e89f3b4a70baa
repoDigests:
- docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387
repoTags:
- docker.io/kindest/kindnetd:v20241007-36f62932
size: "35320503"
- id: sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "74084559"
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:f9c26480f1e722a7d05d7f1bb339180b19f941b23bcc928208e362df04a61270
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.2
size: "25612805"
- id: sha256:9404aea098d9e80cb648d86c07d56130a1fe875ed7c2526251c2ae68a9bf07ba
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.2
size: "23872272"
- id: sha256:021d2420133054f8835987db659750ff639ab6863776460264dd8025c06644ba
repoDigests:
- registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe
repoTags:
- registry.k8s.io/kube-proxy:v1.31.2
size: "26768683"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-463603
size: "2173567"
- id: sha256:bdf62fd3a32f1209270ede068b6e08450dfe125c79b1a8ba8f5685090023bf7f
repoDigests:
- docker.io/library/nginx@sha256:fb197595ebe76b9c0c14ab68159fd3c08bd067ec62300583543f0ebda353b5be
repoTags:
- docker.io/library/nginx:latest
size: "68524740"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-463603 image ls --format yaml --alsologtostderr:
I1209 22:37:07.875042   45652 out.go:345] Setting OutFile to fd 1 ...
I1209 22:37:07.875217   45652 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1209 22:37:07.875227   45652 out.go:358] Setting ErrFile to fd 2...
I1209 22:37:07.875234   45652 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1209 22:37:07.875538   45652 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19888-2244/.minikube/bin
I1209 22:37:07.876244   45652 config.go:182] Loaded profile config "functional-463603": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
I1209 22:37:07.876401   45652 config.go:182] Loaded profile config "functional-463603": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
I1209 22:37:07.876908   45652 cli_runner.go:164] Run: docker container inspect functional-463603 --format={{.State.Status}}
I1209 22:37:07.914940   45652 ssh_runner.go:195] Run: systemctl --version
I1209 22:37:07.914991   45652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-463603
I1209 22:37:07.937848   45652 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/19888-2244/.minikube/machines/functional-463603/id_rsa Username:docker}
I1209 22:37:08.027617   45652 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.30s)
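The four ImageList subtests are the same listing rendered in different encodings; each invocation shells into the node and runs `sudo crictl images --output json` (visible in the stderr traces above) before formatting the result:

  out/minikube-linux-arm64 -p functional-463603 image ls --format short   # repo:tag lines only
  out/minikube-linux-arm64 -p functional-463603 image ls --format table
  out/minikube-linux-arm64 -p functional-463603 image ls --format json
  out/minikube-linux-arm64 -p functional-463603 image ls --format yaml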

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (3.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p functional-463603 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-463603 ssh pgrep buildkitd: exit status 1 (332.800241ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-arm64 -p functional-463603 image build -t localhost/my-image:functional-463603 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-arm64 -p functional-463603 image build -t localhost/my-image:functional-463603 testdata/build --alsologtostderr: (3.180637945s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-arm64 -p functional-463603 image build -t localhost/my-image:functional-463603 testdata/build --alsologtostderr:
I1209 22:37:08.263587   45747 out.go:345] Setting OutFile to fd 1 ...
I1209 22:37:08.264050   45747 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1209 22:37:08.264107   45747 out.go:358] Setting ErrFile to fd 2...
I1209 22:37:08.264132   45747 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1209 22:37:08.264612   45747 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19888-2244/.minikube/bin
I1209 22:37:08.265562   45747 config.go:182] Loaded profile config "functional-463603": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
I1209 22:37:08.268094   45747 config.go:182] Loaded profile config "functional-463603": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
I1209 22:37:08.268762   45747 cli_runner.go:164] Run: docker container inspect functional-463603 --format={{.State.Status}}
I1209 22:37:08.295917   45747 ssh_runner.go:195] Run: systemctl --version
I1209 22:37:08.296040   45747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-463603
I1209 22:37:08.316290   45747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/19888-2244/.minikube/machines/functional-463603/id_rsa Username:docker}
I1209 22:37:08.408386   45747 build_images.go:161] Building image from path: /tmp/build.900787673.tar
I1209 22:37:08.408478   45747 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1209 22:37:08.418648   45747 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.900787673.tar
I1209 22:37:08.423708   45747 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.900787673.tar: stat -c "%s %y" /var/lib/minikube/build/build.900787673.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.900787673.tar': No such file or directory
I1209 22:37:08.423736   45747 ssh_runner.go:362] scp /tmp/build.900787673.tar --> /var/lib/minikube/build/build.900787673.tar (3072 bytes)
I1209 22:37:08.452554   45747 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.900787673
I1209 22:37:08.462196   45747 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.900787673 -xf /var/lib/minikube/build/build.900787673.tar
I1209 22:37:08.471927   45747 containerd.go:394] Building image: /var/lib/minikube/build/build.900787673
I1209 22:37:08.472027   45747 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.900787673 --local dockerfile=/var/lib/minikube/build/build.900787673 --output type=image,name=localhost/my-image:functional-463603
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.2s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 DONE 0.1s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.6s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.5s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.1s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:41da31fd30b10dc4d7bff04f59e14eb918020c55657f06f4b4255664cea80e9c 0.0s done
#8 exporting config sha256:4e5a7f8bb859679f6d93f955499053312582db4e426a697f3a16a7f3472536ed 0.0s done
#8 naming to localhost/my-image:functional-463603 done
#8 DONE 0.2s
I1209 22:37:11.317285   45747 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.900787673 --local dockerfile=/var/lib/minikube/build/build.900787673 --output type=image,name=localhost/my-image:functional-463603: (2.845233881s)
I1209 22:37:11.317352   45747 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.900787673
I1209 22:37:11.327566   45747 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.900787673.tar
I1209 22:37:11.343598   45747 build_images.go:217] Built localhost/my-image:functional-463603 from /tmp/build.900787673.tar
I1209 22:37:11.343627   45747 build_images.go:133] succeeded building to: functional-463603
I1209 22:37:11.343633   45747 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-463603 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.74s)
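The build above goes through `minikube image build`: the context is tarred, copied into the node under /var/lib/minikube/build, and built with buildctl, as the stderr log shows. A hedged sketch of the outer command sequence only (binary path, profile, tag, and context directory copied from the log; this is not the test's helper code):

package main

import (
	"log"
	"os/exec"
)

// run executes a command and returns its combined output, failing loudly on error.
func run(name string, args ...string) []byte {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		log.Fatalf("%s %v failed: %v\n%s", name, args, err, out)
	}
	return out
}

func main() {
	minikube := "out/minikube-linux-arm64" // path used throughout this report
	profile := "functional-463603"

	// Build localhost/my-image from the checked-in testdata/build context.
	run(minikube, "-p", profile, "image", "build",
		"-t", "localhost/my-image:"+profile, "testdata/build", "--alsologtostderr")

	// Confirm the freshly built image is visible to the runtime.
	out := run(minikube, "-p", profile, "image", "ls")
	log.Printf("images after build:\n%s", out)
}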

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (0.91s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-463603
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.91s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p functional-463603 image load --daemon kicbase/echo-server:functional-463603 --alsologtostderr
2024/12/09 22:37:00 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:355: (dbg) Done: out/minikube-linux-arm64 -p functional-463603 image load --daemon kicbase/echo-server:functional-463603 --alsologtostderr: (1.164568969s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-463603 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.46s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p functional-463603 image load --daemon kicbase/echo-server:functional-463603 --alsologtostderr
functional_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p functional-463603 image load --daemon kicbase/echo-server:functional-463603 --alsologtostderr: (1.054517476s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-463603 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.34s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-463603 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.24s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-463603 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.22s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-463603 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.22s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-463603
functional_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p functional-463603 image load --daemon kicbase/echo-server:functional-463603 --alsologtostderr
functional_test.go:245: (dbg) Done: out/minikube-linux-arm64 -p functional-463603 image load --daemon kicbase/echo-server:functional-463603 --alsologtostderr: (1.085749529s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-463603 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.69s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-463603 image save kicbase/echo-server:functional-463603 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.47s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-arm64 -p functional-463603 image rm kicbase/echo-server:functional-463603 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-463603 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.60s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-arm64 -p functional-463603 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-463603 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.68s)
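The last three image tests form a round trip: save an image to a tarball, remove it from the runtime, then load it back from the file. A minimal sketch of that flow under the same assumptions (binary path, profile, image tag, and tar path all taken from the log above):

package main

import (
	"log"
	"os/exec"
)

// mk runs a minikube subcommand against the functional profile.
func mk(args ...string) {
	cmd := exec.Command("out/minikube-linux-arm64", append([]string{"-p", "functional-463603"}, args...)...)
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("minikube %v: %v\n%s", args, err, out)
	}
}

func main() {
	tar := "/home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar"

	mk("image", "save", "kicbase/echo-server:functional-463603", tar) // export to a tarball
	mk("image", "rm", "kicbase/echo-server:functional-463603")        // drop it from the runtime
	mk("image", "load", tar)                                          // restore it from the file
	mk("image", "ls")                                                 // the tag should be listed again
}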

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-463603
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-463603 image save --daemon kicbase/echo-server:functional-463603 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-463603
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.43s)

                                                
                                    
x
+
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-463603
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-463603
--- PASS: TestFunctional/delete_my-image_image (0.01s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.03s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-463603
--- PASS: TestFunctional/delete_minikube_cached_images (0.03s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (113.18s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-901484 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E1209 22:37:33.417503    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/addons-013873/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-901484 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m52.341514461s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-901484 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (113.18s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (43.46s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-901484 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-901484 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-901484 -- rollout status deployment/busybox: (40.496000753s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-901484 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-901484 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-901484 -- exec busybox-7dff88458-vj8f8 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-901484 -- exec busybox-7dff88458-xn6js -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-901484 -- exec busybox-7dff88458-zrhlv -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-901484 -- exec busybox-7dff88458-vj8f8 -- nslookup kubernetes.default
E1209 22:39:49.556133    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/addons-013873/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-901484 -- exec busybox-7dff88458-xn6js -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-901484 -- exec busybox-7dff88458-zrhlv -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-901484 -- exec busybox-7dff88458-vj8f8 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-901484 -- exec busybox-7dff88458-xn6js -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-901484 -- exec busybox-7dff88458-zrhlv -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (43.46s)
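The deploy step applies a multi-replica busybox manifest, waits for the rollout, and then resolves cluster DNS from every pod. A rough Go sketch of the same check, assuming the manifest path, deployment name, and profile shown above (the kubectl wrapper here is illustrative, not the test's helper):

package main

import (
	"log"
	"os/exec"
	"strings"
)

// kubectl runs a kubectl command through the minikube wrapper for the ha-901484 profile.
func kubectl(args ...string) string {
	full := append([]string{"kubectl", "-p", "ha-901484", "--"}, args...)
	out, err := exec.Command("out/minikube-linux-arm64", full...).CombinedOutput()
	if err != nil {
		log.Fatalf("kubectl %v: %v\n%s", args, err, out)
	}
	return string(out)
}

func main() {
	kubectl("apply", "-f", "./testdata/ha/ha-pod-dns-test.yaml")
	kubectl("rollout", "status", "deployment/busybox")

	// Resolve cluster DNS from every busybox replica.
	pods := strings.Fields(kubectl("get", "pods", "-o", "jsonpath={.items[*].metadata.name}"))
	for _, pod := range pods {
		kubectl("exec", pod, "--", "nslookup", "kubernetes.default.svc.cluster.local")
	}
}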

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.54s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-901484 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-901484 -- exec busybox-7dff88458-vj8f8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-901484 -- exec busybox-7dff88458-vj8f8 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-901484 -- exec busybox-7dff88458-xn6js -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-901484 -- exec busybox-7dff88458-xn6js -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-901484 -- exec busybox-7dff88458-zrhlv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-901484 -- exec busybox-7dff88458-zrhlv -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.54s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (24.75s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-901484 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-901484 -v=7 --alsologtostderr: (23.800459168s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-901484 status -v=7 --alsologtostderr
E1209 22:40:17.259159    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/addons-013873/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (24.75s)

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.12s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-901484 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.12s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (1.03s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.025334026s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.03s)

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (18.74s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-901484 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-901484 cp testdata/cp-test.txt ha-901484:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-901484 ssh -n ha-901484 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-901484 cp ha-901484:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile287558922/001/cp-test_ha-901484.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-901484 ssh -n ha-901484 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-901484 cp ha-901484:/home/docker/cp-test.txt ha-901484-m02:/home/docker/cp-test_ha-901484_ha-901484-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-901484 ssh -n ha-901484 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-901484 ssh -n ha-901484-m02 "sudo cat /home/docker/cp-test_ha-901484_ha-901484-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-901484 cp ha-901484:/home/docker/cp-test.txt ha-901484-m03:/home/docker/cp-test_ha-901484_ha-901484-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-901484 ssh -n ha-901484 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-901484 ssh -n ha-901484-m03 "sudo cat /home/docker/cp-test_ha-901484_ha-901484-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-901484 cp ha-901484:/home/docker/cp-test.txt ha-901484-m04:/home/docker/cp-test_ha-901484_ha-901484-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-901484 ssh -n ha-901484 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-901484 ssh -n ha-901484-m04 "sudo cat /home/docker/cp-test_ha-901484_ha-901484-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-901484 cp testdata/cp-test.txt ha-901484-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-901484 ssh -n ha-901484-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-901484 cp ha-901484-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile287558922/001/cp-test_ha-901484-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-901484 ssh -n ha-901484-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-901484 cp ha-901484-m02:/home/docker/cp-test.txt ha-901484:/home/docker/cp-test_ha-901484-m02_ha-901484.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-901484 ssh -n ha-901484-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-901484 ssh -n ha-901484 "sudo cat /home/docker/cp-test_ha-901484-m02_ha-901484.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-901484 cp ha-901484-m02:/home/docker/cp-test.txt ha-901484-m03:/home/docker/cp-test_ha-901484-m02_ha-901484-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-901484 ssh -n ha-901484-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-901484 ssh -n ha-901484-m03 "sudo cat /home/docker/cp-test_ha-901484-m02_ha-901484-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-901484 cp ha-901484-m02:/home/docker/cp-test.txt ha-901484-m04:/home/docker/cp-test_ha-901484-m02_ha-901484-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-901484 ssh -n ha-901484-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-901484 ssh -n ha-901484-m04 "sudo cat /home/docker/cp-test_ha-901484-m02_ha-901484-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-901484 cp testdata/cp-test.txt ha-901484-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-901484 ssh -n ha-901484-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-901484 cp ha-901484-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile287558922/001/cp-test_ha-901484-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-901484 ssh -n ha-901484-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-901484 cp ha-901484-m03:/home/docker/cp-test.txt ha-901484:/home/docker/cp-test_ha-901484-m03_ha-901484.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-901484 ssh -n ha-901484-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-901484 ssh -n ha-901484 "sudo cat /home/docker/cp-test_ha-901484-m03_ha-901484.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-901484 cp ha-901484-m03:/home/docker/cp-test.txt ha-901484-m02:/home/docker/cp-test_ha-901484-m03_ha-901484-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-901484 ssh -n ha-901484-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-901484 ssh -n ha-901484-m02 "sudo cat /home/docker/cp-test_ha-901484-m03_ha-901484-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-901484 cp ha-901484-m03:/home/docker/cp-test.txt ha-901484-m04:/home/docker/cp-test_ha-901484-m03_ha-901484-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-901484 ssh -n ha-901484-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-901484 ssh -n ha-901484-m04 "sudo cat /home/docker/cp-test_ha-901484-m03_ha-901484-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-901484 cp testdata/cp-test.txt ha-901484-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-901484 ssh -n ha-901484-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-901484 cp ha-901484-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile287558922/001/cp-test_ha-901484-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-901484 ssh -n ha-901484-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-901484 cp ha-901484-m04:/home/docker/cp-test.txt ha-901484:/home/docker/cp-test_ha-901484-m04_ha-901484.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-901484 ssh -n ha-901484-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-901484 ssh -n ha-901484 "sudo cat /home/docker/cp-test_ha-901484-m04_ha-901484.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-901484 cp ha-901484-m04:/home/docker/cp-test.txt ha-901484-m02:/home/docker/cp-test_ha-901484-m04_ha-901484-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-901484 ssh -n ha-901484-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-901484 ssh -n ha-901484-m02 "sudo cat /home/docker/cp-test_ha-901484-m04_ha-901484-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-901484 cp ha-901484-m04:/home/docker/cp-test.txt ha-901484-m03:/home/docker/cp-test_ha-901484-m04_ha-901484-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-901484 ssh -n ha-901484-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-901484 ssh -n ha-901484-m03 "sudo cat /home/docker/cp-test_ha-901484-m04_ha-901484-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (18.74s)
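Every cp in the block above is paired with an ssh cat that proves the file actually landed on the target node. A condensed sketch of one such pair, assuming the profile, node name, and paths from the log:

package main

import (
	"log"
	"os/exec"
)

// minikube runs a subcommand against the ha-901484 profile and returns its output.
func minikube(args ...string) []byte {
	out, err := exec.Command("out/minikube-linux-arm64", append([]string{"-p", "ha-901484"}, args...)...).CombinedOutput()
	if err != nil {
		log.Fatalf("minikube %v: %v\n%s", args, err, out)
	}
	return out
}

func main() {
	// Copy a local file onto the m02 node, then read it back over ssh to verify.
	minikube("cp", "testdata/cp-test.txt", "ha-901484-m02:/home/docker/cp-test.txt")
	out := minikube("ssh", "-n", "ha-901484-m02", "sudo cat /home/docker/cp-test.txt")
	log.Printf("remote file contents: %s", out)
}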

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (12.73s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-901484 node stop m02 -v=7 --alsologtostderr
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-901484 node stop m02 -v=7 --alsologtostderr: (12.024171222s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-901484 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-901484 status -v=7 --alsologtostderr: exit status 7 (700.739633ms)

                                                
                                                
-- stdout --
	ha-901484
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-901484-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-901484-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-901484-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1209 22:40:49.231210   62068 out.go:345] Setting OutFile to fd 1 ...
	I1209 22:40:49.231412   62068 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 22:40:49.231439   62068 out.go:358] Setting ErrFile to fd 2...
	I1209 22:40:49.231459   62068 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 22:40:49.231855   62068 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19888-2244/.minikube/bin
	I1209 22:40:49.232134   62068 out.go:352] Setting JSON to false
	I1209 22:40:49.232181   62068 mustload.go:65] Loading cluster: ha-901484
	I1209 22:40:49.232897   62068 config.go:182] Loaded profile config "ha-901484": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
	I1209 22:40:49.232958   62068 status.go:174] checking status of ha-901484 ...
	I1209 22:40:49.233467   62068 notify.go:220] Checking for updates...
	I1209 22:40:49.233856   62068 cli_runner.go:164] Run: docker container inspect ha-901484 --format={{.State.Status}}
	I1209 22:40:49.255043   62068 status.go:371] ha-901484 host status = "Running" (err=<nil>)
	I1209 22:40:49.255072   62068 host.go:66] Checking if "ha-901484" exists ...
	I1209 22:40:49.255375   62068 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-901484
	I1209 22:40:49.272577   62068 host.go:66] Checking if "ha-901484" exists ...
	I1209 22:40:49.272903   62068 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 22:40:49.272942   62068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-901484
	I1209 22:40:49.291968   62068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/19888-2244/.minikube/machines/ha-901484/id_rsa Username:docker}
	I1209 22:40:49.380790   62068 ssh_runner.go:195] Run: systemctl --version
	I1209 22:40:49.385638   62068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 22:40:49.398187   62068 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 22:40:49.459929   62068 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:55 OomKillDisable:true NGoroutines:71 SystemTime:2024-12-09 22:40:49.448788773 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1072-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0]] Warnings:<nil>}}
	I1209 22:40:49.460513   62068 kubeconfig.go:125] found "ha-901484" server: "https://192.168.49.254:8443"
	I1209 22:40:49.460556   62068 api_server.go:166] Checking apiserver status ...
	I1209 22:40:49.460597   62068 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 22:40:49.472257   62068 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1427/cgroup
	I1209 22:40:49.482008   62068 api_server.go:182] apiserver freezer: "9:freezer:/docker/ba1f74253bb8cf5a3dd7b5a72b56e3ac86382cb7b3313d2237ea4ddb2e820e38/kubepods/burstable/podd92928403fadf575998a410e5c87b425/50be0049a47e906db69da471b5b4477b0ba75b1203144bcf3421637ee5e718d0"
	I1209 22:40:49.482079   62068 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/ba1f74253bb8cf5a3dd7b5a72b56e3ac86382cb7b3313d2237ea4ddb2e820e38/kubepods/burstable/podd92928403fadf575998a410e5c87b425/50be0049a47e906db69da471b5b4477b0ba75b1203144bcf3421637ee5e718d0/freezer.state
	I1209 22:40:49.491045   62068 api_server.go:204] freezer state: "THAWED"
	I1209 22:40:49.491076   62068 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1209 22:40:49.498910   62068 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1209 22:40:49.498937   62068 status.go:463] ha-901484 apiserver status = Running (err=<nil>)
	I1209 22:40:49.498947   62068 status.go:176] ha-901484 status: &{Name:ha-901484 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1209 22:40:49.498964   62068 status.go:174] checking status of ha-901484-m02 ...
	I1209 22:40:49.499287   62068 cli_runner.go:164] Run: docker container inspect ha-901484-m02 --format={{.State.Status}}
	I1209 22:40:49.517608   62068 status.go:371] ha-901484-m02 host status = "Stopped" (err=<nil>)
	I1209 22:40:49.517632   62068 status.go:384] host is not running, skipping remaining checks
	I1209 22:40:49.517639   62068 status.go:176] ha-901484-m02 status: &{Name:ha-901484-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1209 22:40:49.517659   62068 status.go:174] checking status of ha-901484-m03 ...
	I1209 22:40:49.517968   62068 cli_runner.go:164] Run: docker container inspect ha-901484-m03 --format={{.State.Status}}
	I1209 22:40:49.536829   62068 status.go:371] ha-901484-m03 host status = "Running" (err=<nil>)
	I1209 22:40:49.536853   62068 host.go:66] Checking if "ha-901484-m03" exists ...
	I1209 22:40:49.537146   62068 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-901484-m03
	I1209 22:40:49.560530   62068 host.go:66] Checking if "ha-901484-m03" exists ...
	I1209 22:40:49.560831   62068 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 22:40:49.560885   62068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-901484-m03
	I1209 22:40:49.581462   62068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/19888-2244/.minikube/machines/ha-901484-m03/id_rsa Username:docker}
	I1209 22:40:49.667808   62068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 22:40:49.680466   62068 kubeconfig.go:125] found "ha-901484" server: "https://192.168.49.254:8443"
	I1209 22:40:49.680493   62068 api_server.go:166] Checking apiserver status ...
	I1209 22:40:49.680536   62068 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 22:40:49.691374   62068 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1361/cgroup
	I1209 22:40:49.701347   62068 api_server.go:182] apiserver freezer: "9:freezer:/docker/994849995c1c407c14b39836bfc35ec31685d18820a7cff3a9596666a703842a/kubepods/burstable/pod9cd8587d2c5b0a6cf6f580d5e814dc96/a102f2c6b45a447dbbde3ccddcb610ee424c3698478d6216c515866cdd40444e"
	I1209 22:40:49.701415   62068 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/994849995c1c407c14b39836bfc35ec31685d18820a7cff3a9596666a703842a/kubepods/burstable/pod9cd8587d2c5b0a6cf6f580d5e814dc96/a102f2c6b45a447dbbde3ccddcb610ee424c3698478d6216c515866cdd40444e/freezer.state
	I1209 22:40:49.712031   62068 api_server.go:204] freezer state: "THAWED"
	I1209 22:40:49.712068   62068 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1209 22:40:49.719871   62068 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1209 22:40:49.719900   62068 status.go:463] ha-901484-m03 apiserver status = Running (err=<nil>)
	I1209 22:40:49.719916   62068 status.go:176] ha-901484-m03 status: &{Name:ha-901484-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1209 22:40:49.719972   62068 status.go:174] checking status of ha-901484-m04 ...
	I1209 22:40:49.720279   62068 cli_runner.go:164] Run: docker container inspect ha-901484-m04 --format={{.State.Status}}
	I1209 22:40:49.738858   62068 status.go:371] ha-901484-m04 host status = "Running" (err=<nil>)
	I1209 22:40:49.738882   62068 host.go:66] Checking if "ha-901484-m04" exists ...
	I1209 22:40:49.739165   62068 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-901484-m04
	I1209 22:40:49.756584   62068 host.go:66] Checking if "ha-901484-m04" exists ...
	I1209 22:40:49.756879   62068 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 22:40:49.756930   62068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-901484-m04
	I1209 22:40:49.774490   62068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32803 SSHKeyPath:/home/jenkins/minikube-integration/19888-2244/.minikube/machines/ha-901484-m04/id_rsa Username:docker}
	I1209 22:40:49.863821   62068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 22:40:49.875696   62068 status.go:176] ha-901484-m04 status: &{Name:ha-901484-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.73s)
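Note that `minikube status` deliberately exits with code 7 once any node is stopped, which is why the non-zero exit above is still a PASS: the test inspects the exit code rather than treating it as a failure. A sketch of reading the exit code the same way, assuming the profile and node name from the log:

package main

import (
	"errors"
	"log"
	"os/exec"
)

func main() {
	profile := "ha-901484"

	// Stop the m02 control-plane node.
	if out, err := exec.Command("out/minikube-linux-arm64", "-p", profile,
		"node", "stop", "m02").CombinedOutput(); err != nil {
		log.Fatalf("node stop: %v\n%s", err, out)
	}

	// status exits non-zero (7 here) when a node is down; inspect the code instead of failing.
	out, err := exec.Command("out/minikube-linux-arm64", "-p", profile, "status").CombinedOutput()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		log.Printf("status exit code %d (expected 7 with a stopped node)\n%s", exitErr.ExitCode(), out)
	} else if err != nil {
		log.Fatalf("status: %v", err)
	} else {
		log.Printf("status succeeded:\n%s", out)
	}
}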

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.75s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.75s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (19.83s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-901484 node start m02 -v=7 --alsologtostderr
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-901484 node start m02 -v=7 --alsologtostderr: (18.304692956s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-901484 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-901484 status -v=7 --alsologtostderr: (1.402937274s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (19.83s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.16s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.154911817s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.16s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (144.99s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-901484 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-901484 -v=7 --alsologtostderr
E1209 22:41:20.193467    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/functional-463603/client.crt: no such file or directory" logger="UnhandledError"
E1209 22:41:20.199939    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/functional-463603/client.crt: no such file or directory" logger="UnhandledError"
E1209 22:41:20.211297    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/functional-463603/client.crt: no such file or directory" logger="UnhandledError"
E1209 22:41:20.232687    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/functional-463603/client.crt: no such file or directory" logger="UnhandledError"
E1209 22:41:20.274082    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/functional-463603/client.crt: no such file or directory" logger="UnhandledError"
E1209 22:41:20.355500    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/functional-463603/client.crt: no such file or directory" logger="UnhandledError"
E1209 22:41:20.517001    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/functional-463603/client.crt: no such file or directory" logger="UnhandledError"
E1209 22:41:20.838694    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/functional-463603/client.crt: no such file or directory" logger="UnhandledError"
E1209 22:41:21.480784    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/functional-463603/client.crt: no such file or directory" logger="UnhandledError"
E1209 22:41:22.762122    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/functional-463603/client.crt: no such file or directory" logger="UnhandledError"
E1209 22:41:25.323960    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/functional-463603/client.crt: no such file or directory" logger="UnhandledError"
E1209 22:41:30.445687    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/functional-463603/client.crt: no such file or directory" logger="UnhandledError"
E1209 22:41:40.687053    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/functional-463603/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 stop -p ha-901484 -v=7 --alsologtostderr: (36.992237282s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 start -p ha-901484 --wait=true -v=7 --alsologtostderr
E1209 22:42:01.169382    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/functional-463603/client.crt: no such file or directory" logger="UnhandledError"
E1209 22:42:42.130790    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/functional-463603/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 start -p ha-901484 --wait=true -v=7 --alsologtostderr: (1m47.818306326s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-901484
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (144.99s)
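This test records the node list, stops the whole profile, restarts it with --wait=true, and verifies the list is unchanged. A minimal sketch of that comparison under the same assumptions (binary path and profile from the log; the comparison helper is illustrative):

package main

import (
	"bytes"
	"log"
	"os/exec"
)

// run executes a minikube subcommand and aborts on failure.
func run(args ...string) {
	out, err := exec.Command("out/minikube-linux-arm64", args...).CombinedOutput()
	if err != nil {
		log.Fatalf("minikube %v: %v\n%s", args, err, out)
	}
}

// nodeList returns the stdout of `minikube node list` for the profile.
func nodeList() []byte {
	out, err := exec.Command("out/minikube-linux-arm64", "node", "list", "-p", "ha-901484").Output()
	if err != nil {
		log.Fatalf("node list: %v", err)
	}
	return out
}

func main() {
	before := nodeList()

	run("stop", "-p", "ha-901484")                 // stop every node in the profile
	run("start", "-p", "ha-901484", "--wait=true") // restart and wait for components

	if after := nodeList(); !bytes.Equal(before, after) {
		log.Fatalf("node list changed across restart:\nbefore:\n%s\nafter:\n%s", before, after)
	}
	log.Print("node list preserved across restart")
}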

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (10.88s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-901484 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-901484 node delete m03 -v=7 --alsologtostderr: (9.94542409s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-901484 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.88s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.78s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.78s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (36.15s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-901484 stop -v=7 --alsologtostderr
E1209 22:44:04.052917    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/functional-463603/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-901484 stop -v=7 --alsologtostderr: (36.031882117s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-901484 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-901484 status -v=7 --alsologtostderr: exit status 7 (114.326519ms)

                                                
                                                
-- stdout --
	ha-901484
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-901484-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-901484-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1209 22:44:24.351403   76367 out.go:345] Setting OutFile to fd 1 ...
	I1209 22:44:24.351577   76367 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 22:44:24.351607   76367 out.go:358] Setting ErrFile to fd 2...
	I1209 22:44:24.351631   76367 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 22:44:24.351941   76367 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19888-2244/.minikube/bin
	I1209 22:44:24.352165   76367 out.go:352] Setting JSON to false
	I1209 22:44:24.352223   76367 mustload.go:65] Loading cluster: ha-901484
	I1209 22:44:24.352299   76367 notify.go:220] Checking for updates...
	I1209 22:44:24.352686   76367 config.go:182] Loaded profile config "ha-901484": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
	I1209 22:44:24.352701   76367 status.go:174] checking status of ha-901484 ...
	I1209 22:44:24.353239   76367 cli_runner.go:164] Run: docker container inspect ha-901484 --format={{.State.Status}}
	I1209 22:44:24.373338   76367 status.go:371] ha-901484 host status = "Stopped" (err=<nil>)
	I1209 22:44:24.373360   76367 status.go:384] host is not running, skipping remaining checks
	I1209 22:44:24.373367   76367 status.go:176] ha-901484 status: &{Name:ha-901484 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1209 22:44:24.373400   76367 status.go:174] checking status of ha-901484-m02 ...
	I1209 22:44:24.373698   76367 cli_runner.go:164] Run: docker container inspect ha-901484-m02 --format={{.State.Status}}
	I1209 22:44:24.390611   76367 status.go:371] ha-901484-m02 host status = "Stopped" (err=<nil>)
	I1209 22:44:24.390631   76367 status.go:384] host is not running, skipping remaining checks
	I1209 22:44:24.390637   76367 status.go:176] ha-901484-m02 status: &{Name:ha-901484-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1209 22:44:24.390657   76367 status.go:174] checking status of ha-901484-m04 ...
	I1209 22:44:24.390980   76367 cli_runner.go:164] Run: docker container inspect ha-901484-m04 --format={{.State.Status}}
	I1209 22:44:24.412849   76367 status.go:371] ha-901484-m04 host status = "Stopped" (err=<nil>)
	I1209 22:44:24.412872   76367 status.go:384] host is not running, skipping remaining checks
	I1209 22:44:24.412879   76367 status.go:176] ha-901484-m04 status: &{Name:ha-901484-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.15s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (77.71s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 start -p ha-901484 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E1209 22:44:49.556493    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/addons-013873/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 start -p ha-901484 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m16.67596545s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-901484 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (77.71s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.76s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.76s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (42.15s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-901484 --control-plane -v=7 --alsologtostderr
E1209 22:46:20.192851    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/functional-463603/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 node add -p ha-901484 --control-plane -v=7 --alsologtostderr: (41.118918112s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-901484 status -v=7 --alsologtostderr
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-901484 status -v=7 --alsologtostderr: (1.025969503s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (42.15s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.03s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.024877588s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.03s)

                                                
                                    
x
+
TestJSONOutput/start/Command (50.76s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-964257 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd
E1209 22:46:47.894347    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/functional-463603/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-964257 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd: (50.754578386s)
--- PASS: TestJSONOutput/start/Command (50.76s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.71s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-964257 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.71s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.66s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-964257 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.66s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (5.72s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-964257 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-964257 --output=json --user=testUser: (5.723131183s)
--- PASS: TestJSONOutput/stop/Command (5.72s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.22s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-138778 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-138778 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (74.712363ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"453b032c-948f-4f9c-acf4-67e2fcfda86b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-138778] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"4c4238c3-6e3c-4204-b383-9b9c3471a88b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19888"}}
	{"specversion":"1.0","id":"2bca27fb-2e07-4b79-a521-fc56c7a4d441","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"0b01d521-496f-4b1a-a803-d4c8ef0b84ad","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19888-2244/kubeconfig"}}
	{"specversion":"1.0","id":"bd631d83-1190-4e39-87d0-e47e121a73f4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19888-2244/.minikube"}}
	{"specversion":"1.0","id":"acb94e02-0d8f-4721-8659-30d4db2588f2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"255c6918-ba08-4378-831d-7563fff7ff17","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"0d19c666-5298-434a-855d-5f07d282ef91","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-138778" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-138778
--- PASS: TestErrorJSONOutput (0.22s)

                                                
                                    
x
+
TestKicCustomNetwork/create_custom_network (41.61s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-315006 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-315006 --network=: (39.450979471s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-315006" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-315006
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-315006: (2.133406582s)
--- PASS: TestKicCustomNetwork/create_custom_network (41.61s)

                                                
                                    
x
+
TestKicCustomNetwork/use_default_bridge_network (35.48s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-760821 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-760821 --network=bridge: (33.468575498s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-760821" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-760821
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-760821: (1.98989864s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (35.48s)

                                                
                                    
x
+
TestKicExistingNetwork (32.81s)

                                                
                                                
=== RUN   TestKicExistingNetwork
I1209 22:48:53.963824    7684 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1209 22:48:53.980052    7684 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1209 22:48:53.980154    7684 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1209 22:48:53.980178    7684 cli_runner.go:164] Run: docker network inspect existing-network
W1209 22:48:53.995985    7684 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1209 22:48:53.996014    7684 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I1209 22:48:53.996028    7684 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I1209 22:48:53.996132    7684 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1209 22:48:54.015238    7684 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-19ca47acc1ca IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:3d:78:e7:22} reservation:<nil>}
I1209 22:48:54.015640    7684 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001d5be10}
I1209 22:48:54.015671    7684 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1209 22:48:54.015727    7684 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1209 22:48:54.090420    7684 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-441026 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-441026 --network=existing-network: (30.709440143s)
helpers_test.go:175: Cleaning up "existing-network-441026" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-441026
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-441026: (1.933791133s)
I1209 22:49:26.752352    7684 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (32.81s)

                                                
                                    
x
+
TestKicCustomSubnet (33.43s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-690576 --subnet=192.168.60.0/24
E1209 22:49:49.556418    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/addons-013873/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-690576 --subnet=192.168.60.0/24: (31.134684337s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-690576 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-690576" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-690576
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-690576: (2.275969048s)
--- PASS: TestKicCustomSubnet (33.43s)

                                                
                                    
x
+
TestKicStaticIP (33.51s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-156102 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-156102 --static-ip=192.168.200.200: (31.301919537s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-156102 ip
helpers_test.go:175: Cleaning up "static-ip-156102" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-156102
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-156102: (2.06561331s)
--- PASS: TestKicStaticIP (33.51s)

                                                
                                    
x
+
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
x
+
TestMinikubeProfile (67.98s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-315539 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-315539 --driver=docker  --container-runtime=containerd: (28.172985615s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-318148 --driver=docker  --container-runtime=containerd
E1209 22:51:12.620491    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/addons-013873/client.crt: no such file or directory" logger="UnhandledError"
E1209 22:51:20.193346    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/functional-463603/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-318148 --driver=docker  --container-runtime=containerd: (34.263837136s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-315539
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-318148
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-318148" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-318148
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-318148: (2.049482774s)
helpers_test.go:175: Cleaning up "first-315539" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-315539
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-315539: (2.16666627s)
--- PASS: TestMinikubeProfile (67.98s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (6.61s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-287497 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-287497 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.606142171s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.61s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-287497 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (6.29s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-289335 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-289335 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.290404858s)
--- PASS: TestMountStart/serial/StartWithMountSecond (6.29s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.28s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-289335 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.28s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (1.59s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-287497 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-287497 --alsologtostderr -v=5: (1.593035657s)
--- PASS: TestMountStart/serial/DeleteFirst (1.59s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-289335 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.2s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-289335
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-289335: (1.198455385s)
--- PASS: TestMountStart/serial/Stop (1.20s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (7.71s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-289335
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-289335: (6.705692675s)
--- PASS: TestMountStart/serial/RestartStopped (7.71s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-289335 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (75.13s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-238199 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-238199 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m14.563656858s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-238199 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (75.13s)

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (20.3s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-238199 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-238199 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-238199 -- rollout status deployment/busybox: (18.296214616s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-238199 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-238199 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-238199 -- exec busybox-7dff88458-92mhc -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-238199 -- exec busybox-7dff88458-zwwj4 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-238199 -- exec busybox-7dff88458-92mhc -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-238199 -- exec busybox-7dff88458-zwwj4 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-238199 -- exec busybox-7dff88458-92mhc -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-238199 -- exec busybox-7dff88458-zwwj4 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (20.30s)

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (1s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-238199 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-238199 -- exec busybox-7dff88458-92mhc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-238199 -- exec busybox-7dff88458-92mhc -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-238199 -- exec busybox-7dff88458-zwwj4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-238199 -- exec busybox-7dff88458-zwwj4 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.00s)

                                                
                                    
x
+
TestMultiNode/serial/AddNode (16.92s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-238199 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-238199 -v 3 --alsologtostderr: (16.191965988s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-238199 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (16.92s)

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-238199 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.68s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.68s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (10.05s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-238199 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-238199 cp testdata/cp-test.txt multinode-238199:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-238199 ssh -n multinode-238199 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-238199 cp multinode-238199:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2443589075/001/cp-test_multinode-238199.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-238199 ssh -n multinode-238199 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-238199 cp multinode-238199:/home/docker/cp-test.txt multinode-238199-m02:/home/docker/cp-test_multinode-238199_multinode-238199-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-238199 ssh -n multinode-238199 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-238199 ssh -n multinode-238199-m02 "sudo cat /home/docker/cp-test_multinode-238199_multinode-238199-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-238199 cp multinode-238199:/home/docker/cp-test.txt multinode-238199-m03:/home/docker/cp-test_multinode-238199_multinode-238199-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-238199 ssh -n multinode-238199 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-238199 ssh -n multinode-238199-m03 "sudo cat /home/docker/cp-test_multinode-238199_multinode-238199-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-238199 cp testdata/cp-test.txt multinode-238199-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-238199 ssh -n multinode-238199-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-238199 cp multinode-238199-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2443589075/001/cp-test_multinode-238199-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-238199 ssh -n multinode-238199-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-238199 cp multinode-238199-m02:/home/docker/cp-test.txt multinode-238199:/home/docker/cp-test_multinode-238199-m02_multinode-238199.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-238199 ssh -n multinode-238199-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-238199 ssh -n multinode-238199 "sudo cat /home/docker/cp-test_multinode-238199-m02_multinode-238199.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-238199 cp multinode-238199-m02:/home/docker/cp-test.txt multinode-238199-m03:/home/docker/cp-test_multinode-238199-m02_multinode-238199-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-238199 ssh -n multinode-238199-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-238199 ssh -n multinode-238199-m03 "sudo cat /home/docker/cp-test_multinode-238199-m02_multinode-238199-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-238199 cp testdata/cp-test.txt multinode-238199-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-238199 ssh -n multinode-238199-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-238199 cp multinode-238199-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2443589075/001/cp-test_multinode-238199-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-238199 ssh -n multinode-238199-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-238199 cp multinode-238199-m03:/home/docker/cp-test.txt multinode-238199:/home/docker/cp-test_multinode-238199-m03_multinode-238199.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-238199 ssh -n multinode-238199-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-238199 ssh -n multinode-238199 "sudo cat /home/docker/cp-test_multinode-238199-m03_multinode-238199.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-238199 cp multinode-238199-m03:/home/docker/cp-test.txt multinode-238199-m02:/home/docker/cp-test_multinode-238199-m03_multinode-238199-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-238199 ssh -n multinode-238199-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-238199 ssh -n multinode-238199-m02 "sudo cat /home/docker/cp-test_multinode-238199-m03_multinode-238199-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.05s)

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.24s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-238199 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-238199 node stop m03: (1.208436246s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-238199 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-238199 status: exit status 7 (512.634943ms)

                                                
                                                
-- stdout --
	multinode-238199
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-238199-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-238199-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-238199 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-238199 status --alsologtostderr: exit status 7 (520.013296ms)

                                                
                                                
-- stdout --
	multinode-238199
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-238199-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-238199-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1209 22:54:13.931604  129945 out.go:345] Setting OutFile to fd 1 ...
	I1209 22:54:13.931814  129945 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 22:54:13.931845  129945 out.go:358] Setting ErrFile to fd 2...
	I1209 22:54:13.931872  129945 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 22:54:13.932232  129945 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19888-2244/.minikube/bin
	I1209 22:54:13.933030  129945 out.go:352] Setting JSON to false
	I1209 22:54:13.933087  129945 mustload.go:65] Loading cluster: multinode-238199
	I1209 22:54:13.933278  129945 notify.go:220] Checking for updates...
	I1209 22:54:13.933609  129945 config.go:182] Loaded profile config "multinode-238199": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
	I1209 22:54:13.933651  129945 status.go:174] checking status of multinode-238199 ...
	I1209 22:54:13.934490  129945 cli_runner.go:164] Run: docker container inspect multinode-238199 --format={{.State.Status}}
	I1209 22:54:13.953785  129945 status.go:371] multinode-238199 host status = "Running" (err=<nil>)
	I1209 22:54:13.953894  129945 host.go:66] Checking if "multinode-238199" exists ...
	I1209 22:54:13.954199  129945 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-238199
	I1209 22:54:13.980348  129945 host.go:66] Checking if "multinode-238199" exists ...
	I1209 22:54:13.980669  129945 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 22:54:13.980719  129945 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-238199
	I1209 22:54:13.999887  129945 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/19888-2244/.minikube/machines/multinode-238199/id_rsa Username:docker}
	I1209 22:54:14.088037  129945 ssh_runner.go:195] Run: systemctl --version
	I1209 22:54:14.092722  129945 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 22:54:14.105045  129945 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 22:54:14.180265  129945 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:61 SystemTime:2024-12-09 22:54:14.169895343 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1072-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0]] Warnings:<nil>}}
	I1209 22:54:14.180908  129945 kubeconfig.go:125] found "multinode-238199" server: "https://192.168.67.2:8443"
	I1209 22:54:14.180952  129945 api_server.go:166] Checking apiserver status ...
	I1209 22:54:14.180999  129945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 22:54:14.192473  129945 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1382/cgroup
	I1209 22:54:14.201848  129945 api_server.go:182] apiserver freezer: "9:freezer:/docker/6212ed15db1956b4df33524e80f8f88e5e5bcc38735689166d4bb69236e17488/kubepods/burstable/pod08eecc85732dbd16a5e9493bc9f1111a/ce74c73e658ba6d7d7334a78c834e7b5c3774b07f6868ad8f8a06bdcd2adc156"
	I1209 22:54:14.201922  129945 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/6212ed15db1956b4df33524e80f8f88e5e5bcc38735689166d4bb69236e17488/kubepods/burstable/pod08eecc85732dbd16a5e9493bc9f1111a/ce74c73e658ba6d7d7334a78c834e7b5c3774b07f6868ad8f8a06bdcd2adc156/freezer.state
	I1209 22:54:14.210857  129945 api_server.go:204] freezer state: "THAWED"
	I1209 22:54:14.210884  129945 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1209 22:54:14.218494  129945 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1209 22:54:14.218521  129945 status.go:463] multinode-238199 apiserver status = Running (err=<nil>)
	I1209 22:54:14.218531  129945 status.go:176] multinode-238199 status: &{Name:multinode-238199 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1209 22:54:14.218550  129945 status.go:174] checking status of multinode-238199-m02 ...
	I1209 22:54:14.218892  129945 cli_runner.go:164] Run: docker container inspect multinode-238199-m02 --format={{.State.Status}}
	I1209 22:54:14.235928  129945 status.go:371] multinode-238199-m02 host status = "Running" (err=<nil>)
	I1209 22:54:14.235954  129945 host.go:66] Checking if "multinode-238199-m02" exists ...
	I1209 22:54:14.236243  129945 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-238199-m02
	I1209 22:54:14.252217  129945 host.go:66] Checking if "multinode-238199-m02" exists ...
	I1209 22:54:14.252520  129945 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 22:54:14.252574  129945 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-238199-m02
	I1209 22:54:14.269773  129945 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/19888-2244/.minikube/machines/multinode-238199-m02/id_rsa Username:docker}
	I1209 22:54:14.356869  129945 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 22:54:14.368306  129945 status.go:176] multinode-238199-m02 status: &{Name:multinode-238199-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1209 22:54:14.368354  129945 status.go:174] checking status of multinode-238199-m03 ...
	I1209 22:54:14.368675  129945 cli_runner.go:164] Run: docker container inspect multinode-238199-m03 --format={{.State.Status}}
	I1209 22:54:14.389972  129945 status.go:371] multinode-238199-m03 host status = "Stopped" (err=<nil>)
	I1209 22:54:14.389996  129945 status.go:384] host is not running, skipping remaining checks
	I1209 22:54:14.390003  129945 status.go:176] multinode-238199-m03 status: &{Name:multinode-238199-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.24s)

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (9.22s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-238199 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-238199 node start m03 -v=7 --alsologtostderr: (8.49374362s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-238199 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.22s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (81.92s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-238199
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-238199
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-238199: (24.813051906s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-238199 --wait=true -v=8 --alsologtostderr
E1209 22:54:49.556621    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/addons-013873/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-238199 --wait=true -v=8 --alsologtostderr: (56.979539933s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-238199
--- PASS: TestMultiNode/serial/RestartKeepsNodes (81.92s)

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (5.27s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-238199 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-238199 node delete m03: (4.622267455s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-238199 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.27s)
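The readiness assertion at the end of this subtest is an ordinary kubectl go-template; pulled out of the test-harness quoting, the equivalent standalone command (a sketch, not part of the test binary) is:

    # print the Ready condition status of every node, one value per line
    kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'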

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (23.9s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-238199 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-238199 stop: (23.703167532s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-238199 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-238199 status: exit status 7 (98.515836ms)

                                                
                                                
-- stdout --
	multinode-238199
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-238199-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-238199 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-238199 status --alsologtostderr: exit status 7 (101.268423ms)

                                                
                                                
-- stdout --
	multinode-238199
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-238199-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1209 22:56:14.664841  137921 out.go:345] Setting OutFile to fd 1 ...
	I1209 22:56:14.664956  137921 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 22:56:14.664966  137921 out.go:358] Setting ErrFile to fd 2...
	I1209 22:56:14.664972  137921 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 22:56:14.665213  137921 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19888-2244/.minikube/bin
	I1209 22:56:14.665413  137921 out.go:352] Setting JSON to false
	I1209 22:56:14.665441  137921 mustload.go:65] Loading cluster: multinode-238199
	I1209 22:56:14.665539  137921 notify.go:220] Checking for updates...
	I1209 22:56:14.665892  137921 config.go:182] Loaded profile config "multinode-238199": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
	I1209 22:56:14.665906  137921 status.go:174] checking status of multinode-238199 ...
	I1209 22:56:14.666795  137921 cli_runner.go:164] Run: docker container inspect multinode-238199 --format={{.State.Status}}
	I1209 22:56:14.685134  137921 status.go:371] multinode-238199 host status = "Stopped" (err=<nil>)
	I1209 22:56:14.685157  137921 status.go:384] host is not running, skipping remaining checks
	I1209 22:56:14.685164  137921 status.go:176] multinode-238199 status: &{Name:multinode-238199 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1209 22:56:14.685192  137921 status.go:174] checking status of multinode-238199-m02 ...
	I1209 22:56:14.685487  137921 cli_runner.go:164] Run: docker container inspect multinode-238199-m02 --format={{.State.Status}}
	I1209 22:56:14.708157  137921 status.go:371] multinode-238199-m02 host status = "Stopped" (err=<nil>)
	I1209 22:56:14.708181  137921 status.go:384] host is not running, skipping remaining checks
	I1209 22:56:14.708188  137921 status.go:176] multinode-238199-m02 status: &{Name:multinode-238199-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.90s)
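As both status calls above show, "minikube status" exits with code 7 once the hosts are stopped; the test treats that as expected rather than as a failure. A minimal sketch for scripts that want the same behaviour (profile name illustrative):

    # exit code 7 from "minikube status" means stopped components, not a broken command
    minikube -p multinode-238199 status
    [ $? -eq 7 ] && echo "cluster is stopped"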

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (53.57s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-238199 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E1209 22:56:20.192632    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/functional-463603/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-238199 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (52.880078014s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-238199 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (53.57s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (36.21s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-238199
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-238199-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-238199-m02 --driver=docker  --container-runtime=containerd: exit status 14 (95.799446ms)

                                                
                                                
-- stdout --
	* [multinode-238199-m02] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19888
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19888-2244/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19888-2244/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-238199-m02' is duplicated with machine name 'multinode-238199-m02' in profile 'multinode-238199'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-238199-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-238199-m03 --driver=docker  --container-runtime=containerd: (33.71061845s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-238199
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-238199: exit status 80 (359.171746ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-238199 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-238199-m03 already exists in multinode-238199-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-238199-m03
E1209 22:57:43.256175    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/functional-463603/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-238199-m03: (1.973964956s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (36.21s)
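The first failure is expected: a multi-node profile already owns machines named <profile>-m02, <profile>-m03, and so on, so a brand-new profile called multinode-238199-m02 collides with an existing machine name and start exits with MK_USAGE (exit code 14), while -m03 is free again after the earlier DeleteNode step. The failing call, as logged, reduces to:

    # exits 14 (MK_USAGE): profile name duplicates a machine name inside profile multinode-238199
    minikube start -p multinode-238199-m02 --driver=docker --container-runtime=containerd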

                                                
                                    
x
+
TestPreload (111.53s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-406447 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-406447 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m12.556777507s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-406447 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-406447 image pull gcr.io/k8s-minikube/busybox: (1.950974806s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-406447
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-406447: (11.973469628s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-406447 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-406447 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (22.313233496s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-406447 image list
helpers_test.go:175: Cleaning up "test-preload-406447" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-406447
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-406447: (2.398989216s)
--- PASS: TestPreload (111.53s)
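Condensed, the preload check above is: create a cluster with preloaded tarballs disabled, pull an extra image, restart, and confirm the image survived. A sketch using the same subcommands, with the profile name as a placeholder:

    minikube start -p test-preload --preload=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.24.4
    minikube -p test-preload image pull gcr.io/k8s-minikube/busybox
    minikube stop -p test-preload
    minikube start -p test-preload --wait=true --driver=docker --container-runtime=containerd
    minikube -p test-preload image list   # busybox should still appear after the restart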

                                                
                                    
x
+
TestScheduledStopUnix (105.27s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-488059 --memory=2048 --driver=docker  --container-runtime=containerd
E1209 22:59:49.556500    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/addons-013873/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-488059 --memory=2048 --driver=docker  --container-runtime=containerd: (29.395854368s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-488059 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-488059 -n scheduled-stop-488059
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-488059 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1209 23:00:09.867996    7684 retry.go:31] will retry after 62.505µs: open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/scheduled-stop-488059/pid: no such file or directory
I1209 23:00:09.869159    7684 retry.go:31] will retry after 132.812µs: open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/scheduled-stop-488059/pid: no such file or directory
I1209 23:00:09.869570    7684 retry.go:31] will retry after 163.704µs: open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/scheduled-stop-488059/pid: no such file or directory
I1209 23:00:09.870274    7684 retry.go:31] will retry after 367.988µs: open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/scheduled-stop-488059/pid: no such file or directory
I1209 23:00:09.871364    7684 retry.go:31] will retry after 335.328µs: open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/scheduled-stop-488059/pid: no such file or directory
I1209 23:00:09.872512    7684 retry.go:31] will retry after 853.061µs: open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/scheduled-stop-488059/pid: no such file or directory
I1209 23:00:09.873636    7684 retry.go:31] will retry after 1.620342ms: open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/scheduled-stop-488059/pid: no such file or directory
I1209 23:00:09.875851    7684 retry.go:31] will retry after 2.514162ms: open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/scheduled-stop-488059/pid: no such file or directory
I1209 23:00:09.879350    7684 retry.go:31] will retry after 1.60218ms: open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/scheduled-stop-488059/pid: no such file or directory
I1209 23:00:09.883102    7684 retry.go:31] will retry after 2.60534ms: open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/scheduled-stop-488059/pid: no such file or directory
I1209 23:00:09.885940    7684 retry.go:31] will retry after 7.797157ms: open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/scheduled-stop-488059/pid: no such file or directory
I1209 23:00:09.894206    7684 retry.go:31] will retry after 4.914334ms: open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/scheduled-stop-488059/pid: no such file or directory
I1209 23:00:09.899700    7684 retry.go:31] will retry after 14.625472ms: open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/scheduled-stop-488059/pid: no such file or directory
I1209 23:00:09.914967    7684 retry.go:31] will retry after 16.635307ms: open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/scheduled-stop-488059/pid: no such file or directory
I1209 23:00:09.932222    7684 retry.go:31] will retry after 15.086777ms: open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/scheduled-stop-488059/pid: no such file or directory
I1209 23:00:09.947466    7684 retry.go:31] will retry after 63.123992ms: open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/scheduled-stop-488059/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-488059 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-488059 -n scheduled-stop-488059
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-488059
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-488059 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E1209 23:01:20.193111    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/functional-463603/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-488059
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-488059: exit status 7 (73.119332ms)

                                                
                                                
-- stdout --
	scheduled-stop-488059
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-488059 -n scheduled-stop-488059
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-488059 -n scheduled-stop-488059: exit status 7 (73.881162ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-488059" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-488059
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-488059: (4.256699744s)
--- PASS: TestScheduledStopUnix (105.27s)
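The scheduled-stop flow exercised here comes down to three invocations; a minimal sketch with the flags from this run (profile name illustrative):

    minikube stop -p scheduled-stop --schedule 5m        # schedule a stop five minutes out
    minikube stop -p scheduled-stop --cancel-scheduled   # cancel the pending stop
    minikube stop -p scheduled-stop --schedule 15s       # short schedule; status reports Stopped once it fires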

                                                
                                    
x
+
TestInsufficientStorage (10.57s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-074926 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-074926 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (8.104666487s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"a04881d9-2b75-4578-9bdf-d3f76f80cbcb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-074926] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"005d8eeb-c433-42e0-a466-d1b2afef4163","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19888"}}
	{"specversion":"1.0","id":"1458da71-376d-40d2-a4b7-f93171e3aef9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"0aa3ed3f-b29c-48ad-91b6-ea93de092e6f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19888-2244/kubeconfig"}}
	{"specversion":"1.0","id":"ddddc9fc-7590-47d4-b092-c3c89f27df6f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19888-2244/.minikube"}}
	{"specversion":"1.0","id":"eff6f116-3977-49da-b315-fdee364c4ac0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"313848d7-46b8-40e9-8942-f0dc5bfc2f15","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"fbb2d748-ef40-4466-bb9b-72d2996217ab","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"34d15e73-26c6-432c-a2c0-5d452c706a38","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"40357108-5000-45b6-a628-b0967ea654a8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"1029610b-3fe2-4e82-95b9-fbe2759be048","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"02f51065-2bb4-4cfb-aa11-9c2da61e4fc0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-074926\" primary control-plane node in \"insufficient-storage-074926\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"5e1150f8-f366-4c6e-be9a-bebdb9449eaf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.45-1730888964-19917 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"8d7431a6-757c-47c4-934c-17f1a061f22a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"d677e72b-16c0-4c66-a11c-4b045ab8e52c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-074926 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-074926 --output=json --layout=cluster: exit status 7 (290.001597ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-074926","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-074926","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1209 23:01:33.580723  156544 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-074926" does not appear in /home/jenkins/minikube-integration/19888-2244/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-074926 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-074926 --output=json --layout=cluster: exit status 7 (273.709818ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-074926","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-074926","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1209 23:01:33.855651  156605 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-074926" does not appear in /home/jenkins/minikube-integration/19888-2244/kubeconfig
	E1209 23:01:33.866314  156605 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/insufficient-storage-074926/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-074926" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-074926
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-074926: (1.903075346s)
--- PASS: TestInsufficientStorage (10.57s)
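With --output=json the start command emits CloudEvents-style progress records, and the storage check aborts with exit code 26 (RSRC_DOCKER_STORAGE) as shown above. The cluster can then be inspected in the same machine-readable form; this is the exact invocation the test uses:

    # StatusCode 507 / StatusName "InsufficientStorage" mirror the JSON captured above
    minikube status -p insufficient-storage-074926 --output=json --layout=cluster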

                                                
                                    
x
+
TestRunningBinaryUpgrade (94.89s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.841376820 start -p running-upgrade-947915 --memory=2200 --vm-driver=docker  --container-runtime=containerd
E1209 23:07:52.622753    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/addons-013873/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.841376820 start -p running-upgrade-947915 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (47.81265677s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-947915 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-947915 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (43.125054734s)
helpers_test.go:175: Cleaning up "running-upgrade-947915" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-947915
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-947915: (3.228430745s)
--- PASS: TestRunningBinaryUpgrade (94.89s)

                                                
                                    
x
+
TestKubernetesUpgrade (360.75s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-352146 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-352146 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m0.01486762s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-352146
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-352146: (1.421688115s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-352146 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-352146 status --format={{.Host}}: exit status 7 (87.510205ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-352146 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-352146 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m46.679638834s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-352146 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-352146 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-352146 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd: exit status 106 (121.928133ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-352146] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19888
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19888-2244/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19888-2244/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.2 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-352146
	    minikube start -p kubernetes-upgrade-352146 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-3521462 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.2, by running:
	    
	    minikube start -p kubernetes-upgrade-352146 --kubernetes-version=v1.31.2
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-352146 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-352146 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (9.777685938s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-352146" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-352146
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-352146: (2.518289201s)
--- PASS: TestKubernetesUpgrade (360.75s)
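Condensed, the upgrade path is: start on the old version, stop, start again with a newer --kubernetes-version; an in-place downgrade is refused with exit code 106 (K8S_DOWNGRADE_UNSUPPORTED), and the stderr above spells out the delete/recreate alternatives. A sketch with this run's versions (profile name illustrative):

    minikube start -p k8s-upgrade --kubernetes-version=v1.20.0 --driver=docker --container-runtime=containerd
    minikube stop  -p k8s-upgrade
    minikube start -p k8s-upgrade --kubernetes-version=v1.31.2 --driver=docker --container-runtime=containerd   # upgrade in place
    minikube start -p k8s-upgrade --kubernetes-version=v1.20.0 --driver=docker --container-runtime=containerd   # refused: downgrade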

                                                
                                    
x
+
TestMissingContainerUpgrade (173.72s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.3436499933 start -p missing-upgrade-393560 --memory=2200 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.3436499933 start -p missing-upgrade-393560 --memory=2200 --driver=docker  --container-runtime=containerd: (1m36.455725323s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-393560
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-393560: (10.287038391s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-393560
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-393560 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-393560 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m3.484718837s)
helpers_test.go:175: Cleaning up "missing-upgrade-393560" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-393560
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-393560: (2.732871904s)
--- PASS: TestMissingContainerUpgrade (173.72s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-768666 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-768666 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd: exit status 14 (87.013888ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-768666] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19888
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19888-2244/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19888-2244/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
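The MK_USAGE exit is expected: --no-kubernetes and --kubernetes-version are mutually exclusive. If a version is pinned in the global config, the unset command suggested in the stderr clears it before retrying:

    minikube config unset kubernetes-version
    minikube start -p NoKubernetes-768666 --no-kubernetes --driver=docker --container-runtime=containerd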

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (39.05s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-768666 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-768666 --driver=docker  --container-runtime=containerd: (38.410064593s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-768666 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (39.05s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (17.86s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-768666 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-768666 --no-kubernetes --driver=docker  --container-runtime=containerd: (15.361617366s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-768666 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-768666 status -o json: exit status 2 (285.363755ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-768666","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-768666
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-768666: (2.216430601s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (17.86s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (6.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-768666 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-768666 --no-kubernetes --driver=docker  --container-runtime=containerd: (6.213899668s)
--- PASS: TestNoKubernetes/serial/Start (6.21s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-768666 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-768666 "sudo systemctl is-active --quiet service kubelet": exit status 1 (273.564692ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)
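Exit status 3 here is the standard systemd code for an inactive unit, so the non-zero exit is the assertion succeeding: kubelet is genuinely not running in the --no-kubernetes node. The same check can be run by hand:

    # returns 0 if kubelet is active, 3 if inactive (the expected outcome for --no-kubernetes)
    minikube ssh -p NoKubernetes-768666 "sudo systemctl is-active --quiet service kubelet"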

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (0.97s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.97s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-768666
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-768666: (1.208932946s)
--- PASS: TestNoKubernetes/serial/Stop (1.21s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (6.59s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-768666 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-768666 --driver=docker  --container-runtime=containerd: (6.588912769s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.59s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.38s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-768666 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-768666 "sudo systemctl is-active --quiet service kubelet": exit status 1 (379.246438ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.38s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (0.72s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.72s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (159.59s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.3118692443 start -p stopped-upgrade-675635 --memory=2200 --vm-driver=docker  --container-runtime=containerd
E1209 23:04:49.556586    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/addons-013873/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.3118692443 start -p stopped-upgrade-675635 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (47.639717265s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.3118692443 -p stopped-upgrade-675635 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.3118692443 -p stopped-upgrade-675635 stop: (19.891368441s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-675635 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E1209 23:06:20.192514    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/functional-463603/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-675635 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m32.056658597s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (159.59s)
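The pattern mirrors TestRunningBinaryUpgrade, except the cluster is stopped under the old release before the new binary takes over. A sketch of the three steps, with <old-minikube> standing in for a previously released binary such as the v1.26.0 build used above:

    <old-minikube> start -p stopped-upgrade --memory=2200 --vm-driver=docker --container-runtime=containerd
    <old-minikube> -p stopped-upgrade stop
    minikube start -p stopped-upgrade --memory=2200 --driver=docker --container-runtime=containerd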

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (1.47s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-675635
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-675635: (1.471089346s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.47s)

                                                
                                    
x
+
TestPause/serial/Start (69.93s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-147832 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-147832 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (1m9.924950346s)
--- PASS: TestPause/serial/Start (69.93s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (3.79s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-312894 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-312894 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (173.235521ms)

                                                
                                                
-- stdout --
	* [false-312894] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19888
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19888-2244/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19888-2244/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1209 23:09:40.306533  196508 out.go:345] Setting OutFile to fd 1 ...
	I1209 23:09:40.306758  196508 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 23:09:40.306793  196508 out.go:358] Setting ErrFile to fd 2...
	I1209 23:09:40.306814  196508 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 23:09:40.307211  196508 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19888-2244/.minikube/bin
	I1209 23:09:40.307767  196508 out.go:352] Setting JSON to false
	I1209 23:09:40.308977  196508 start.go:129] hostinfo: {"hostname":"ip-172-31-24-2","uptime":3127,"bootTime":1733782653,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1072-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1209 23:09:40.309058  196508 start.go:139] virtualization:  
	I1209 23:09:40.311397  196508 out.go:177] * [false-312894] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1209 23:09:40.313143  196508 out.go:177]   - MINIKUBE_LOCATION=19888
	I1209 23:09:40.313214  196508 notify.go:220] Checking for updates...
	I1209 23:09:40.316597  196508 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 23:09:40.317963  196508 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19888-2244/kubeconfig
	I1209 23:09:40.319642  196508 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19888-2244/.minikube
	I1209 23:09:40.320999  196508 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1209 23:09:40.322202  196508 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 23:09:40.324006  196508 config.go:182] Loaded profile config "pause-147832": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
	I1209 23:09:40.324115  196508 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 23:09:40.345705  196508 docker.go:123] docker version: linux-27.4.0:Docker Engine - Community
	I1209 23:09:40.345840  196508 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 23:09:40.416172  196508 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:52 SystemTime:2024-12-09 23:09:40.406809953 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1072-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0]] Warnings:<nil>}}
	I1209 23:09:40.416289  196508 docker.go:318] overlay module found
	I1209 23:09:40.417963  196508 out.go:177] * Using the docker driver based on user configuration
	I1209 23:09:40.419534  196508 start.go:297] selected driver: docker
	I1209 23:09:40.419554  196508 start.go:901] validating driver "docker" against <nil>
	I1209 23:09:40.419569  196508 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 23:09:40.421848  196508 out.go:201] 
	W1209 23:09:40.423161  196508 out.go:270] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I1209 23:09:40.424679  196508 out.go:201] 

                                                
                                                
** /stderr **
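The refusal is by design: with --container-runtime=containerd minikube requires some CNI, so --cni=false is rejected with MK_USAGE. As a hedged example (not part of this test), picking one of the built-in CNI choices, or simply leaving --cni at its default, would let an otherwise identical start proceed:

    # containerd needs a CNI; "bridge" is one of the built-in choices
    minikube start -p false-312894 --memory=2048 --cni=bridge --driver=docker --container-runtime=containerd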
net_test.go:88: 
----------------------- debugLogs start: false-312894 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-312894

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-312894

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-312894

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-312894

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-312894

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-312894

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-312894

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-312894

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-312894

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-312894

>>> host: /etc/nsswitch.conf:
* Profile "false-312894" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-312894"

>>> host: /etc/hosts:
* Profile "false-312894" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-312894"

>>> host: /etc/resolv.conf:
* Profile "false-312894" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-312894"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-312894

>>> host: crictl pods:
* Profile "false-312894" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-312894"

>>> host: crictl containers:
* Profile "false-312894" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-312894"

>>> k8s: describe netcat deployment:
error: context "false-312894" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-312894" does not exist

>>> k8s: netcat logs:
error: context "false-312894" does not exist

>>> k8s: describe coredns deployment:
error: context "false-312894" does not exist

>>> k8s: describe coredns pods:
error: context "false-312894" does not exist

>>> k8s: coredns logs:
error: context "false-312894" does not exist

>>> k8s: describe api server pod(s):
error: context "false-312894" does not exist

>>> k8s: api server logs:
error: context "false-312894" does not exist

>>> host: /etc/cni:
* Profile "false-312894" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-312894"

>>> host: ip a s:
* Profile "false-312894" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-312894"

>>> host: ip r s:
* Profile "false-312894" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-312894"

>>> host: iptables-save:
* Profile "false-312894" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-312894"

>>> host: iptables table nat:
* Profile "false-312894" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-312894"

>>> k8s: describe kube-proxy daemon set:
error: context "false-312894" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-312894" does not exist

>>> k8s: kube-proxy logs:
error: context "false-312894" does not exist

>>> host: kubelet daemon status:
* Profile "false-312894" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-312894"

>>> host: kubelet daemon config:
* Profile "false-312894" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-312894"

>>> k8s: kubelet logs:
* Profile "false-312894" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-312894"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-312894" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-312894"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-312894" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-312894"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19888-2244/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 09 Dec 2024 23:09:31 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: pause-147832
contexts:
- context:
    cluster: pause-147832
    extensions:
    - extension:
        last-update: Mon, 09 Dec 2024 23:09:31 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: pause-147832
  name: pause-147832
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-147832
  user:
    client-certificate: /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/pause-147832/client.crt
    client-key: /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/pause-147832/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-312894

>>> host: docker daemon status:
* Profile "false-312894" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-312894"

>>> host: docker daemon config:
* Profile "false-312894" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-312894"

>>> host: /etc/docker/daemon.json:
* Profile "false-312894" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-312894"

>>> host: docker system info:
* Profile "false-312894" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-312894"

>>> host: cri-docker daemon status:
* Profile "false-312894" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-312894"

>>> host: cri-docker daemon config:
* Profile "false-312894" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-312894"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-312894" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-312894"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-312894" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-312894"

>>> host: cri-dockerd version:
* Profile "false-312894" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-312894"

>>> host: containerd daemon status:
* Profile "false-312894" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-312894"

>>> host: containerd daemon config:
* Profile "false-312894" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-312894"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-312894" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-312894"

>>> host: /etc/containerd/config.toml:
* Profile "false-312894" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-312894"

>>> host: containerd config dump:
* Profile "false-312894" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-312894"

>>> host: crio daemon status:
* Profile "false-312894" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-312894"

>>> host: crio daemon config:
* Profile "false-312894" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-312894"

>>> host: /etc/crio:
* Profile "false-312894" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-312894"

>>> host: crio config:
* Profile "false-312894" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-312894"

----------------------- debugLogs end: false-312894 [took: 3.439383851s] --------------------------------
helpers_test.go:175: Cleaning up "false-312894" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-312894
--- PASS: TestNetworkPlugins/group/false (3.79s)
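
Every probe in the debugLogs dump above fails with either "context was not found" or "Profile \"false-312894\" not found" because the start command was rejected up front with MK_USAGE (the "containerd" runtime requires a real CNI, so the "false" CNI option never produces a cluster); the false-312894 profile and kubeconfig context are therefore never created, and the kubectl config dump above shows only a pause-147832 context with an empty current-context. The following Go program is a minimal sketch, not part of the test suite, of how a missing context can be detected from the kubeconfig before running such probes; the kubeconfig path and context name are taken from this report and should be treated as placeholders.

// contextcheck.go - a minimal sketch (not part of the minikube test suite) for detecting
// the "context was not found" situation seen above by inspecting the kubeconfig directly.
package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	kubeconfig := os.Getenv("KUBECONFIG") // e.g. /home/jenkins/minikube-integration/19888-2244/kubeconfig
	want := "false-312894"                // the context every debug probe tried to use

	cfg, err := clientcmd.LoadFromFile(kubeconfig)
	if err != nil {
		fmt.Fprintf(os.Stderr, "cannot read kubeconfig: %v\n", err)
		os.Exit(1)
	}

	if _, ok := cfg.Contexts[want]; !ok {
		// This is exactly the state in the dump above: only pause-147832 exists and
		// current-context is "", so kubectl reports "context was not found".
		fmt.Printf("context %q not found; contexts available: %d, current-context: %q\n",
			want, len(cfg.Contexts), cfg.CurrentContext)
		os.Exit(1)
	}
	fmt.Printf("context %q exists\n", want)
}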

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (7.83s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-147832 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-147832 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (7.799877946s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (7.83s)

                                                
                                    
x
+
TestPause/serial/Pause (0.91s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-147832 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.91s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.4s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-147832 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-147832 --output=json --layout=cluster: exit status 2 (400.940413ms)

                                                
                                                
-- stdout --
	{"Name":"pause-147832","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-147832","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.40s)
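
The JSON above is printed even though the status command exits with status 2: in the paused state the cluster and apiserver report StatusCode 418 ("Paused") while the kubelet reports 405 ("Stopped"). The following is a minimal sketch of decoding that payload in Go; the struct mirrors only the fields visible in this report, not minikube's full status schema.

// statusparse.go - a minimal sketch for reading the `minikube status --output=json
// --layout=cluster` payload shown above; field set is an assumption based on this report.
package main

import (
	"encoding/json"
	"fmt"
)

type component struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
}

type clusterStatus struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
	Nodes      []struct {
		Name       string               `json:"Name"`
		StatusName string               `json:"StatusName"`
		Components map[string]component `json:"Components"`
	} `json:"Nodes"`
}

func main() {
	// Payload copied (abridged) from the VerifyStatus output above.
	raw := `{"Name":"pause-147832","StatusCode":418,"StatusName":"Paused","Nodes":[{"Name":"pause-147832","StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}`

	var st clusterStatus
	if err := json.Unmarshal([]byte(raw), &st); err != nil {
		panic(err)
	}
	// 418 is the code this report shows for "Paused"; the CLI also exits non-zero
	// (exit status 2) in this state, which the test treats as acceptable.
	fmt.Printf("cluster %s is %s; apiserver=%s kubelet=%s\n",
		st.Name, st.StatusName,
		st.Nodes[0].Components["apiserver"].StatusName,
		st.Nodes[0].Components["kubelet"].StatusName)
}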

                                                
                                    
x
+
TestPause/serial/Unpause (0.86s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-147832 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.86s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (1.1s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-147832 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-147832 --alsologtostderr -v=5: (1.09786847s)
--- PASS: TestPause/serial/PauseAgain (1.10s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (2.8s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-147832 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-147832 --alsologtostderr -v=5: (2.795857052s)
--- PASS: TestPause/serial/DeletePaused (2.80s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (0.43s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-147832
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-147832: exit status 1 (22.777467ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-147832: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.43s)
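
VerifyDeletedResources confirms that the delete removed everything: the profile is gone from profile list, docker ps -a shows no leftover container, and docker volume inspect pause-147832 fails with "no such volume" while printing an empty array. The helper below is a rough sketch of that last check, not the test's own code.

// volumegone.go - a rough sketch of the cleanup check above: after `minikube delete -p
// pause-147832`, `docker volume inspect pause-147832` should fail with "no such volume".
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// volumeGone reports whether docker no longer knows the named volume.
func volumeGone(name string) bool {
	out, err := exec.Command("docker", "volume", "inspect", name).CombinedOutput()
	if err == nil {
		return false // inspect succeeded, so the volume still exists
	}
	// Matches the stderr seen above: "Error response from daemon: get pause-147832: no such volume"
	return strings.Contains(string(out), "no such volume")
}

func main() {
	fmt.Println("pause-147832 deleted:", volumeGone("pause-147832"))
}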

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (161.15s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-098617 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
E1209 23:11:20.193359    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/functional-463603/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-098617 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: (2m41.14937514s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (161.15s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (8.58s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-098617 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [636c9f1f-710e-444b-82d7-94d93b58ad27] Pending
helpers_test.go:344: "busybox" [636c9f1f-710e-444b-82d7-94d93b58ad27] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [636c9f1f-710e-444b-82d7-94d93b58ad27] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.00362386s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-098617 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.58s)
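
The DeployApp step applies testdata/busybox.yaml and then waits up to 8m0s for a pod labelled integration-test=busybox in the default namespace to reach Running before exec'ing "ulimit -n" in it. The program below is a rough sketch of that poll using client-go, not the actual helpers_test.go implementation; the kubeconfig path is a placeholder.

// waitforbusybox.go - a rough sketch (not helpers_test.go) of the wait performed above:
// poll the default namespace until a pod matching integration-test=busybox is Running.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path for this sketch.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	deadline := time.Now().Add(8 * time.Minute) // the test waits up to 8m0s
	for time.Now().Before(deadline) {
		pods, err := client.CoreV1().Pods("default").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "integration-test=busybox"})
		if err == nil {
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					fmt.Printf("pod %s is Running\n", p.Name)
					return
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for integration-test=busybox")
}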

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.56s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-098617 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-098617 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.295664421s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-098617 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.56s)
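
Enabling the addon with --images=MetricsServer=registry.k8s.io/echoserver:1.4 and --registries=MetricsServer=fake.domain rewrites the metrics-server Deployment to an image that will never serve real metrics, which is enough for the later existence checks. The sketch below reads the resulting image back with client-go, roughly what the kubectl describe step verifies; it is not minikube's own code and the kubeconfig path is a placeholder.

// checkaddonimage.go - a rough sketch (not minikube's verification code) for reading back
// the rewritten metrics-server image after `addons enable metrics-server --images=... --registries=...`.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path for this sketch.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	dep, err := client.AppsV1().Deployments("kube-system").Get(
		context.TODO(), "metrics-server", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, c := range dep.Spec.Template.Spec.Containers {
		// With the flags used above this should report
		// fake.domain/registry.k8s.io/echoserver:1.4.
		fmt.Printf("container %s uses image %s\n", c.Name, c.Image)
	}
}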

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (12.55s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-098617 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-098617 --alsologtostderr -v=3: (12.549259102s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.55s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (73.45s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-548785 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-548785 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.2: (1m13.453890587s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (73.45s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-098617 -n old-k8s-version-098617
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-098617 -n old-k8s-version-098617: exit status 7 (102.83009ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-098617 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.25s)
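
With the profile stopped, minikube status --format={{.Host}} prints "Stopped" and exits with status 7, which the test records as "may be ok" before enabling the dashboard addon against the stopped profile. The snippet below is a minimal sketch of capturing and tolerating that non-zero exit from Go; the binary path and profile name are copied from the log.

// statusexit.go - a minimal sketch of how the non-zero `minikube status` exit seen above
// (exit status 7 with "Stopped") can be captured and tolerated from Go.
package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "status",
		"--format={{.Host}}", "-p", "old-k8s-version-098617")
	out, err := cmd.CombinedOutput()

	exitCode := 0
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		exitCode = exitErr.ExitCode()
	} else if err != nil {
		panic(err) // the binary could not be started at all
	}

	host := strings.TrimSpace(string(out))
	// The test logs this as "status error: exit status 7 (may be ok)" and carries on,
	// since a stopped host is the expected state at this point.
	fmt.Printf("host=%q exit=%d\n", host, exitCode)
}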

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (8.39s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-548785 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [29b0c71a-525c-4d62-99db-d2ae3a4195fb] Pending
helpers_test.go:344: "busybox" [29b0c71a-525c-4d62-99db-d2ae3a4195fb] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [29b0c71a-525c-4d62-99db-d2ae3a4195fb] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.004734357s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-548785 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.39s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.3s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-548785 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-548785 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.166626305s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-548785 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.30s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (12.05s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-548785 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-548785 --alsologtostderr -v=3: (12.053542962s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.05s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-548785 -n no-preload-548785
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-548785 -n no-preload-548785: exit status 7 (81.442546ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-548785 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (267.53s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-548785 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.2
E1209 23:16:20.193550    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/functional-463603/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:19:49.556436    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/addons-013873/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-548785 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.2: (4m27.163878321s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-548785 -n no-preload-548785
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (267.53s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-mnpb9" [e6717671-1f61-4c82-b4e0-f9062581b22d] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003491695s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-mnpb9" [e6717671-1f61-4c82-b4e0-f9062581b22d] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004535716s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-548785 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-548785 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (3.14s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-548785 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-548785 -n no-preload-548785
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-548785 -n no-preload-548785: exit status 2 (382.483047ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-548785 -n no-preload-548785
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-548785 -n no-preload-548785: exit status 2 (334.505267ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-548785 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-548785 -n no-preload-548785
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-548785 -n no-preload-548785
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.14s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (64.64s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-744076 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-744076 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.2: (1m4.636584196s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (64.64s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-9w2zp" [3a9fa3fa-a298-44fe-8a1f-434eef723bc4] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005493973s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.16s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-9w2zp" [3a9fa3fa-a298-44fe-8a1f-434eef723bc4] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005832507s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-098617 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.16s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.35s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-098617 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.35s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (3.64s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-098617 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p old-k8s-version-098617 --alsologtostderr -v=1: (1.020698981s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-098617 -n old-k8s-version-098617
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-098617 -n old-k8s-version-098617: exit status 2 (420.882792ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-098617 -n old-k8s-version-098617
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-098617 -n old-k8s-version-098617: exit status 2 (395.327545ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-098617 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-098617 -n old-k8s-version-098617
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-098617 -n old-k8s-version-098617
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.64s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (63.65s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-473580 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.2
E1209 23:21:20.193417    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/functional-463603/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-473580 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.2: (1m3.654864743s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (63.65s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (8.35s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-744076 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [2595bb4f-26f8-460d-bb94-e93700b8261f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [2595bb4f-26f8-460d-bb94-e93700b8261f] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.00416112s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-744076 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.35s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.14s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-744076 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-744076 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.017355888s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-744076 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.14s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (12.06s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-744076 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-744076 --alsologtostderr -v=3: (12.060176784s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.06s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-744076 -n embed-certs-744076
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-744076 -n embed-certs-744076: exit status 7 (110.272895ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-744076 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.25s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (266.17s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-744076 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-744076 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.2: (4m25.80115837s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-744076 -n embed-certs-744076
E1209 23:26:20.192871    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/functional-463603/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (266.17s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.49s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-473580 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [ffd6d426-ded0-4ffa-8933-5e69035a6c0f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [ffd6d426-ded0-4ffa-8933-5e69035a6c0f] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.003029425s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-473580 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.49s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.77s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-473580 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-473580 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.611265548s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-473580 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.77s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.39s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-473580 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-473580 --alsologtostderr -v=3: (12.386691907s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.39s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-473580 -n default-k8s-diff-port-473580
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-473580 -n default-k8s-diff-port-473580: exit status 7 (85.85687ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-473580 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (291.66s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-473580 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.2
E1209 23:23:48.754820    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/old-k8s-version-098617/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:23:48.761144    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/old-k8s-version-098617/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:23:48.772493    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/old-k8s-version-098617/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:23:48.793940    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/old-k8s-version-098617/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:23:48.835329    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/old-k8s-version-098617/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:23:48.916779    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/old-k8s-version-098617/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:23:49.078226    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/old-k8s-version-098617/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:23:49.399985    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/old-k8s-version-098617/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:23:50.042129    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/old-k8s-version-098617/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:23:51.323946    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/old-k8s-version-098617/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:23:53.885686    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/old-k8s-version-098617/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:23:59.007370    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/old-k8s-version-098617/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:24:09.250067    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/old-k8s-version-098617/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:24:29.731670    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/old-k8s-version-098617/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:24:32.624517    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/addons-013873/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:24:49.556442    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/addons-013873/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:25:10.693864    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/old-k8s-version-098617/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:25:20.996826    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/no-preload-548785/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:25:21.003402    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/no-preload-548785/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:25:21.014832    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/no-preload-548785/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:25:21.036266    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/no-preload-548785/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:25:21.077708    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/no-preload-548785/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:25:21.159154    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/no-preload-548785/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:25:21.320599    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/no-preload-548785/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:25:21.642176    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/no-preload-548785/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:25:22.284019    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/no-preload-548785/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:25:23.565816    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/no-preload-548785/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:25:26.127105    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/no-preload-548785/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:25:31.249193    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/no-preload-548785/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:25:41.491043    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/no-preload-548785/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:26:01.973416    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/no-preload-548785/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-473580 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.2: (4m51.094102974s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-473580 -n default-k8s-diff-port-473580
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (291.66s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-fhp25" [2672d68d-bb93-4d81-aca8-0a12866b44b1] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00576678s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.12s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-fhp25" [2672d68d-bb93-4d81-aca8-0a12866b44b1] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004871773s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-744076 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.12s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-744076 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241007-36f62932
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (3.25s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-744076 --alsologtostderr -v=1
E1209 23:26:32.615736    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/old-k8s-version-098617/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-744076 -n embed-certs-744076
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-744076 -n embed-certs-744076: exit status 2 (317.078785ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-744076 -n embed-certs-744076
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-744076 -n embed-certs-744076: exit status 2 (344.217988ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-744076 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-744076 -n embed-certs-744076
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-744076 -n embed-certs-744076
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.25s)
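For reference, the pause/unpause sequence exercised above can be reproduced manually (a sketch, assuming the embed-certs-744076 profile is still present); while paused, status reports APIServer "Paused" and Kubelet "Stopped", each with exit status 2, which the test treats as acceptable.
	out/minikube-linux-arm64 pause -p embed-certs-744076 --alsologtostderr -v=1
	out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-744076 -n embed-certs-744076
	out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-744076 -n embed-certs-744076
	out/minikube-linux-arm64 unpause -p embed-certs-744076 --alsologtostderr -v=1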

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (35.46s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-529674 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.2
E1209 23:26:42.935040    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/no-preload-548785/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-529674 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.2: (35.459200451s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (35.46s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-zch4z" [90966bf7-ecf7-4967-943b-235fc7a9dbd5] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.006519936s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.36s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-529674 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-529674 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.364871472s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.36s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.27s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-529674 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-529674 --alsologtostderr -v=3: (1.273065496s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.27s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-529674 -n newest-cni-529674
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-529674 -n newest-cni-529674: exit status 7 (74.824933ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-529674 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (21.93s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-529674 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-529674 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.2: (21.55430602s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-529674 -n newest-cni-529674
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (21.93s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.13s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-zch4z" [90966bf7-ecf7-4967-943b-235fc7a9dbd5] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.008329557s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-473580 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.13s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.44s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-473580 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241007-36f62932
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.44s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (4.65s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-473580 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p default-k8s-diff-port-473580 --alsologtostderr -v=1: (1.311898782s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-473580 -n default-k8s-diff-port-473580
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-473580 -n default-k8s-diff-port-473580: exit status 2 (495.167667ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-473580 -n default-k8s-diff-port-473580
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-473580 -n default-k8s-diff-port-473580: exit status 2 (454.048241ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-473580 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p default-k8s-diff-port-473580 --alsologtostderr -v=1: (1.165992812s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-473580 -n default-k8s-diff-port-473580
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-473580 -n default-k8s-diff-port-473580
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (4.65s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (73.51s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-312894 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-312894 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (1m13.506730201s)
--- PASS: TestNetworkPlugins/group/auto/Start (73.51s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-529674 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241007-36f62932
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (4.21s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-529674 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-529674 -n newest-cni-529674
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-529674 -n newest-cni-529674: exit status 2 (482.891152ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-529674 -n newest-cni-529674
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-529674 -n newest-cni-529674: exit status 2 (724.888061ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-529674 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p newest-cni-529674 --alsologtostderr -v=1: (1.018295309s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-529674 -n newest-cni-529674
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-529674 -n newest-cni-529674
--- PASS: TestStartStop/group/newest-cni/serial/Pause (4.21s)
E1209 23:33:18.706539    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/default-k8s-diff-port-473580/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:33:45.561823    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/kindnet-312894/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:33:45.568255    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/kindnet-312894/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:33:45.579771    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/kindnet-312894/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:33:45.601206    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/kindnet-312894/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:33:45.642759    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/kindnet-312894/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:33:45.724256    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/kindnet-312894/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:33:45.885843    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/kindnet-312894/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:33:46.207469    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/kindnet-312894/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:33:46.849484    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/kindnet-312894/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:33:47.401516    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/auto-312894/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:33:47.407985    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/auto-312894/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:33:47.419439    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/auto-312894/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:33:47.440884    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/auto-312894/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:33:47.482232    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/auto-312894/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:33:47.563637    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/auto-312894/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:33:47.725153    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/auto-312894/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:33:48.046885    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/auto-312894/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:33:48.131311    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/kindnet-312894/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:33:48.688141    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/auto-312894/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:33:48.754603    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/old-k8s-version-098617/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:33:49.969933    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/auto-312894/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:33:50.692674    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/kindnet-312894/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:33:52.531375    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/auto-312894/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:33:55.814171    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/kindnet-312894/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:33:57.653062    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/auto-312894/client.crt: no such file or directory" logger="UnhandledError"

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (59.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-312894 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
E1209 23:28:04.856905    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/no-preload-548785/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-312894 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (59.266868303s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (59.27s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-6txgb" [91ff1696-84f6-4f7a-b31f-02405345c0c8] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004305358s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-312894 "pgrep -a kubelet"
I1209 23:28:47.121672    7684 config.go:182] Loaded profile config "auto-312894": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.28s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (9.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-312894 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-dfffb" [a387032b-f8e4-41d9-ae2e-1271546e20da] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1209 23:28:48.755310    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/old-k8s-version-098617/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-dfffb" [a387032b-f8e4-41d9-ae2e-1271546e20da] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.005886904s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.31s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-312894 "pgrep -a kubelet"
I1209 23:28:51.847577    7684 config.go:182] Loaded profile config "kindnet-312894": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.28s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (8.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-312894 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-mq7t7" [e0d02a7a-a00b-45c8-86c3-57d3a2b8fb52] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-mq7t7" [e0d02a7a-a00b-45c8-86c3-57d3a2b8fb52] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 8.022273929s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (8.28s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-312894 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-312894 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-312894 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.15s)
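The three connectivity checks above (DNS, Localhost, HairPin) all run inside the netcat deployment created in NetCatPod and can be replayed by hand (a sketch, assuming the auto-312894 context is still available): resolve kubernetes.default, probe localhost, and probe the netcat service name to exercise hairpin routing.
	kubectl --context auto-312894 exec deployment/netcat -- nslookup kubernetes.default
	kubectl --context auto-312894 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
	kubectl --context auto-312894 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"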

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-312894 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.32s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-312894 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-312894 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (71.58s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-312894 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-312894 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (1m11.582438226s)
--- PASS: TestNetworkPlugins/group/calico/Start (71.58s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (59.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-312894 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
E1209 23:29:49.556724    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/addons-013873/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:30:20.996189    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/no-preload-548785/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-312894 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (59.332620335s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (59.33s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-312894 "pgrep -a kubelet"
I1209 23:30:26.709333    7684 config.go:182] Loaded profile config "custom-flannel-312894": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-312894 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-2f8t7" [ec5027db-a71c-4590-9c86-36bd581ca5a9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-2f8t7" [ec5027db-a71c-4590-9c86-36bd581ca5a9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.004533956s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.35s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-zjsnd" [837cfd9d-d332-4b69-8310-0f1e4f4887c7] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.00481649s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-312894 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-312894 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-312894 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-312894 "pgrep -a kubelet"
I1209 23:30:38.958502    7684 config.go:182] Loaded profile config "calico-312894": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.28s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (10.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-312894 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-nssdp" [6c07d7be-7cc0-49d3-b650-3321bad902fa] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-nssdp" [6c07d7be-7cc0-49d3-b650-3321bad902fa] Running
E1209 23:30:48.698863    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/no-preload-548785/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.004353948s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.27s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-312894 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.22s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-312894 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.22s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-312894 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.23s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (76.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-312894 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
E1209 23:31:03.259727    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/functional-463603/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-312894 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (1m16.413867334s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (76.41s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (56.53s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-312894 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
E1209 23:31:20.193008    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/functional-463603/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:31:56.768947    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/default-k8s-diff-port-473580/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:31:56.775326    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/default-k8s-diff-port-473580/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:31:56.786689    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/default-k8s-diff-port-473580/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:31:56.808052    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/default-k8s-diff-port-473580/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:31:56.849369    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/default-k8s-diff-port-473580/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:31:56.930782    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/default-k8s-diff-port-473580/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:31:57.092365    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/default-k8s-diff-port-473580/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:31:57.413937    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/default-k8s-diff-port-473580/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:31:58.055985    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/default-k8s-diff-port-473580/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:31:59.337511    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/default-k8s-diff-port-473580/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:32:01.898869    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/default-k8s-diff-port-473580/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:32:07.020520    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/default-k8s-diff-port-473580/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-312894 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (56.529976825s)
--- PASS: TestNetworkPlugins/group/flannel/Start (56.53s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-mtqnb" [c4a07141-9a0e-48bf-8dbb-375871ff2bb0] Running
E1209 23:32:17.262008    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/default-k8s-diff-port-473580/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004202163s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-312894 "pgrep -a kubelet"
I1209 23:32:17.731779    7684 config.go:182] Loaded profile config "enable-default-cni-312894": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-312894 replace --force -f testdata/netcat-deployment.yaml
I1209 23:32:18.027077    7684 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-58fzc" [46d8f9d8-f20f-47df-a4ba-b6a366a33dbf] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-58fzc" [46d8f9d8-f20f-47df-a4ba-b6a366a33dbf] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.003833856s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-312894 "pgrep -a kubelet"
I1209 23:32:19.396979    7684 config.go:182] Loaded profile config "flannel-312894": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.34s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (10.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-312894 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-gfkh5" [a906b540-34d5-41f4-8e38-2f84cab67014] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-gfkh5" [a906b540-34d5-41f4-8e38-2f84cab67014] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.004211901s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-312894 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-312894 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-312894 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-312894 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-312894 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-312894 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (69.99s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-312894 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-312894 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (1m9.987959941s)
--- PASS: TestNetworkPlugins/group/bridge/Start (69.99s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-312894 "pgrep -a kubelet"
I1209 23:34:04.113029    7684 config.go:182] Loaded profile config "bridge-312894": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (9.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-312894 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-tfrlj" [a9e75c4f-213a-4f2f-b1a0-1026654d176e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1209 23:34:06.056205    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/kindnet-312894/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-tfrlj" [a9e75c4f-213a-4f2f-b1a0-1026654d176e] Running
E1209 23:34:07.894379    7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/auto-312894/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.003510087s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-312894 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-312894 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-312894 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)

                                                
                                    

Test skip (29/330)

x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.2/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.2/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.2/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0.55s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-687325 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-687325" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-687325
--- SKIP: TestDownloadOnlyKic (0.55s)

                                                
                                    
x
+
TestOffline (0s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0.01s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:698: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.01s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:972: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-956061" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-956061
--- SKIP: TestStartStop/group/disable-driver-mounts (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.57s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-312894 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-312894

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-312894

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-312894

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-312894

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-312894

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-312894

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-312894

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-312894

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-312894

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-312894

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-312894" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-312894"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-312894" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-312894"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-312894" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-312894"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: kubenet-312894

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-312894" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-312894"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-312894" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-312894"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-312894" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-312894" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-312894" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-312894" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-312894" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-312894" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-312894" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-312894" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-312894" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-312894"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-312894" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-312894"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-312894" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-312894"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-312894" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-312894"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-312894" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-312894"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-312894" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-312894" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-312894" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-312894" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-312894"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-312894" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-312894"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-312894" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-312894"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-312894" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-312894"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-312894" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-312894"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19888-2244/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 09 Dec 2024 23:09:31 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: pause-147832
contexts:
- context:
    cluster: pause-147832
    extensions:
    - extension:
        last-update: Mon, 09 Dec 2024 23:09:31 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: pause-147832
  name: pause-147832
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-147832
  user:
    client-certificate: /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/pause-147832/client.crt
    client-key: /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/pause-147832/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-312894

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-312894" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-312894"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-312894" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-312894"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-312894" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-312894"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-312894" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-312894"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-312894" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-312894"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-312894" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-312894"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-312894" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-312894"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-312894" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-312894"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-312894" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-312894"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-312894" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-312894"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-312894" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-312894"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-312894" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-312894"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-312894" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-312894"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-312894" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-312894"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-312894" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-312894"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-312894" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-312894"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-312894" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-312894"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-312894" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-312894"

                                                
                                                
----------------------- debugLogs end: kubenet-312894 [took: 3.407770174s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-312894" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-312894
--- SKIP: TestNetworkPlugins/group/kubenet (3.57s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (4.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-312894 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-312894

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-312894

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-312894

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-312894

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-312894

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-312894

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-312894

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-312894

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-312894

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-312894

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-312894" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-312894"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-312894" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-312894"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-312894" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-312894"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: cilium-312894

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-312894" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-312894"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-312894" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-312894"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-312894" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-312894" does not exist

>>> k8s: netcat logs:
error: context "cilium-312894" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-312894" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-312894" does not exist

>>> k8s: coredns logs:
error: context "cilium-312894" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-312894" does not exist

>>> k8s: api server logs:
error: context "cilium-312894" does not exist

>>> host: /etc/cni:
* Profile "cilium-312894" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-312894"

>>> host: ip a s:
* Profile "cilium-312894" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-312894"

>>> host: ip r s:
* Profile "cilium-312894" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-312894"

>>> host: iptables-save:
* Profile "cilium-312894" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-312894"

>>> host: iptables table nat:
* Profile "cilium-312894" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-312894"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-312894

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-312894

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-312894" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-312894" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-312894

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-312894

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-312894" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-312894" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-312894" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-312894" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-312894" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-312894" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-312894"

>>> host: kubelet daemon config:
* Profile "cilium-312894" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-312894"

>>> k8s: kubelet logs:
* Profile "cilium-312894" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-312894"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-312894" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-312894"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-312894" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-312894"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19888-2244/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 09 Dec 2024 23:09:31 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: pause-147832
contexts:
- context:
    cluster: pause-147832
    extensions:
    - extension:
        last-update: Mon, 09 Dec 2024 23:09:31 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: pause-147832
  name: pause-147832
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-147832
  user:
    client-certificate: /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/pause-147832/client.crt
    client-key: /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/pause-147832/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-312894

>>> host: docker daemon status:
* Profile "cilium-312894" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-312894"

>>> host: docker daemon config:
* Profile "cilium-312894" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-312894"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-312894" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-312894"

>>> host: docker system info:
* Profile "cilium-312894" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-312894"

>>> host: cri-docker daemon status:
* Profile "cilium-312894" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-312894"

>>> host: cri-docker daemon config:
* Profile "cilium-312894" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-312894"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-312894" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-312894"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-312894" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-312894"

>>> host: cri-dockerd version:
* Profile "cilium-312894" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-312894"

>>> host: containerd daemon status:
* Profile "cilium-312894" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-312894"

>>> host: containerd daemon config:
* Profile "cilium-312894" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-312894"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-312894" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-312894"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-312894" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-312894"

>>> host: containerd config dump:
* Profile "cilium-312894" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-312894"

>>> host: crio daemon status:
* Profile "cilium-312894" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-312894"

>>> host: crio daemon config:
* Profile "cilium-312894" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-312894"

>>> host: /etc/crio:
* Profile "cilium-312894" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-312894"

>>> host: crio config:
* Profile "cilium-312894" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-312894"

----------------------- debugLogs end: cilium-312894 [took: 3.928995361s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-312894" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-312894
--- SKIP: TestNetworkPlugins/group/cilium (4.10s)