Test Report: Docker_Linux_containerd_arm64 19876

0db15b506654906b6081fade5258c34c52419f7c:2024-10-28:36841

Tests failed (1/330)

Order  Failed test                                              Duration (s)
304    TestStartStop/group/old-k8s-version/serial/SecondStart   378.69
TestStartStop/group/old-k8s-version/serial/SecondStart (378.69s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-674802 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p old-k8s-version-674802 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: exit status 102 (6m14.629231054s)
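
Exit status 102 sits in the range minikube reserves for control-plane errors (100-109), and the 6m14s runtime lines up with the profile's 6m component-wait timeout ("Will wait 6m0s for node" below), so the control plane most likely never became healthy within the wait window. Below is a minimal sketch for rerunning this failure locally, assuming a minikube source checkout and a local docker daemon; the -minikube-start-args flag follows minikube's integration-test harness and may differ by version. The whole old-k8s-version serial group is run because SecondStart restarts a profile created by the earlier steps:

    # Hypothetical local repro (flag names per minikube's integration harness; verify against your checkout):
    $ go test ./test/integration -timeout 120m \
        -run 'TestStartStop/group/old-k8s-version' \
        -args -minikube-start-args='--driver=docker --container-runtime=containerd'
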

-- stdout --
	* [old-k8s-version-674802] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19876
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19876-1313708/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19876-1313708/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	* Using the docker driver based on existing profile
	* Starting "old-k8s-version-674802" primary control-plane node in "old-k8s-version-674802" cluster
	* Pulling base image v0.0.45-1729876044-19868 ...
	* Restarting existing docker container for "old-k8s-version-674802" ...
	* Preparing Kubernetes v1.20.0 on containerd 1.7.22 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image registry.k8s.io/echoserver:1.4
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-674802 addons enable metrics-server
	
	* Enabled addons: metrics-server, dashboard, default-storageclass, storage-provisioner
	
	

-- /stdout --
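
For reference, the dashboard hint above is the addon's generic message; the final stdout line already lists metrics-server among the enabled addons. Addon state on this profile can be confirmed with the standard addons subcommand:

    $ out/minikube-linux-arm64 -p old-k8s-version-674802 addons list
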
** stderr ** 
	I1028 11:26:42.771937 1522650 out.go:345] Setting OutFile to fd 1 ...
	I1028 11:26:42.772148 1522650 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 11:26:42.772170 1522650 out.go:358] Setting ErrFile to fd 2...
	I1028 11:26:42.772190 1522650 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 11:26:42.772442 1522650 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19876-1313708/.minikube/bin
	I1028 11:26:42.772804 1522650 out.go:352] Setting JSON to false
	I1028 11:26:42.773742 1522650 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":148133,"bootTime":1729966670,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1028 11:26:42.773824 1522650 start.go:139] virtualization:  
	I1028 11:26:42.775903 1522650 out.go:177] * [old-k8s-version-674802] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1028 11:26:42.777844 1522650 out.go:177]   - MINIKUBE_LOCATION=19876
	I1028 11:26:42.777922 1522650 notify.go:220] Checking for updates...
	I1028 11:26:42.780590 1522650 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 11:26:42.782162 1522650 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19876-1313708/kubeconfig
	I1028 11:26:42.783422 1522650 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19876-1313708/.minikube
	I1028 11:26:42.785052 1522650 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1028 11:26:42.786538 1522650 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 11:26:42.788272 1522650 config.go:182] Loaded profile config "old-k8s-version-674802": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I1028 11:26:42.790388 1522650 out.go:177] * Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	I1028 11:26:42.791746 1522650 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 11:26:42.826546 1522650 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1028 11:26:42.826668 1522650 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1028 11:26:42.909259 1522650 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:67 SystemTime:2024-10-28 11:26:42.894056944 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1028 11:26:42.909364 1522650 docker.go:318] overlay module found
	I1028 11:26:42.911725 1522650 out.go:177] * Using the docker driver based on existing profile
	I1028 11:26:42.912868 1522650 start.go:297] selected driver: docker
	I1028 11:26:42.912881 1522650 start.go:901] validating driver "docker" against &{Name:old-k8s-version-674802 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-674802 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 11:26:42.912995 1522650 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 11:26:42.913658 1522650 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1028 11:26:42.982880 1522650 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:67 SystemTime:2024-10-28 11:26:42.973864869 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1028 11:26:42.983203 1522650 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 11:26:42.983225 1522650 cni.go:84] Creating CNI manager for ""
	I1028 11:26:42.983280 1522650 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1028 11:26:42.983316 1522650 start.go:340] cluster config:
	{Name:old-k8s-version-674802 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-674802 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:do
cker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 11:26:42.986761 1522650 out.go:177] * Starting "old-k8s-version-674802" primary control-plane node in "old-k8s-version-674802" cluster
	I1028 11:26:42.987946 1522650 cache.go:121] Beginning downloading kic base image for docker with containerd
	I1028 11:26:42.989288 1522650 out.go:177] * Pulling base image v0.0.45-1729876044-19868 ...
	I1028 11:26:42.990516 1522650 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I1028 11:26:42.990559 1522650 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19876-1313708/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I1028 11:26:42.990581 1522650 cache.go:56] Caching tarball of preloaded images
	I1028 11:26:42.990655 1522650 preload.go:172] Found /home/jenkins/minikube-integration/19876-1313708/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1028 11:26:42.990664 1522650 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on containerd
	I1028 11:26:42.990772 1522650 profile.go:143] Saving config to /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/old-k8s-version-674802/config.json ...
	I1028 11:26:42.990956 1522650 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e in local docker daemon
	I1028 11:26:43.008673 1522650 image.go:98] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e in local docker daemon, skipping pull
	I1028 11:26:43.008692 1522650 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e exists in daemon, skipping load
	I1028 11:26:43.008704 1522650 cache.go:194] Successfully downloaded all kic artifacts
	I1028 11:26:43.008729 1522650 start.go:360] acquireMachinesLock for old-k8s-version-674802: {Name:mkbd322987ec66edb2ef5f7245f402a1adfd92d5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 11:26:43.008781 1522650 start.go:364] duration metric: took 32.296µs to acquireMachinesLock for "old-k8s-version-674802"
	I1028 11:26:43.008800 1522650 start.go:96] Skipping create...Using existing machine configuration
	I1028 11:26:43.008805 1522650 fix.go:54] fixHost starting: 
	I1028 11:26:43.009039 1522650 cli_runner.go:164] Run: docker container inspect old-k8s-version-674802 --format={{.State.Status}}
	I1028 11:26:43.030951 1522650 fix.go:112] recreateIfNeeded on old-k8s-version-674802: state=Stopped err=<nil>
	W1028 11:26:43.030978 1522650 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 11:26:43.034311 1522650 out.go:177] * Restarting existing docker container for "old-k8s-version-674802" ...
	I1028 11:26:43.037687 1522650 cli_runner.go:164] Run: docker start old-k8s-version-674802
	I1028 11:26:43.426029 1522650 cli_runner.go:164] Run: docker container inspect old-k8s-version-674802 --format={{.State.Status}}
	I1028 11:26:43.457446 1522650 kic.go:430] container "old-k8s-version-674802" state is running.
	I1028 11:26:43.457823 1522650 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-674802
	I1028 11:26:43.486493 1522650 profile.go:143] Saving config to /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/old-k8s-version-674802/config.json ...
	I1028 11:26:43.486721 1522650 machine.go:93] provisionDockerMachine start ...
	I1028 11:26:43.486788 1522650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-674802
	I1028 11:26:43.523688 1522650 main.go:141] libmachine: Using SSH client type: native
	I1028 11:26:43.523989 1522650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x415580] 0x417dc0 <nil>  [] 0s} 127.0.0.1 40375 <nil> <nil>}
	I1028 11:26:43.524000 1522650 main.go:141] libmachine: About to run SSH command:
	hostname
	I1028 11:26:43.524530 1522650 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:35226->127.0.0.1:40375: read: connection reset by peer
	I1028 11:26:46.655020 1522650 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-674802
	
	I1028 11:26:46.655046 1522650 ubuntu.go:169] provisioning hostname "old-k8s-version-674802"
	I1028 11:26:46.655112 1522650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-674802
	I1028 11:26:46.681319 1522650 main.go:141] libmachine: Using SSH client type: native
	I1028 11:26:46.681565 1522650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x415580] 0x417dc0 <nil>  [] 0s} 127.0.0.1 40375 <nil> <nil>}
	I1028 11:26:46.681584 1522650 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-674802 && echo "old-k8s-version-674802" | sudo tee /etc/hostname
	I1028 11:26:46.820451 1522650 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-674802
	
	I1028 11:26:46.820626 1522650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-674802
	I1028 11:26:46.845262 1522650 main.go:141] libmachine: Using SSH client type: native
	I1028 11:26:46.845508 1522650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x415580] 0x417dc0 <nil>  [] 0s} 127.0.0.1 40375 <nil> <nil>}
	I1028 11:26:46.845529 1522650 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-674802' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-674802/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-674802' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 11:26:46.975581 1522650 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 11:26:46.975679 1522650 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19876-1313708/.minikube CaCertPath:/home/jenkins/minikube-integration/19876-1313708/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19876-1313708/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19876-1313708/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19876-1313708/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19876-1313708/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19876-1313708/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19876-1313708/.minikube}
	I1028 11:26:46.975745 1522650 ubuntu.go:177] setting up certificates
	I1028 11:26:46.975770 1522650 provision.go:84] configureAuth start
	I1028 11:26:46.975868 1522650 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-674802
	I1028 11:26:46.996543 1522650 provision.go:143] copyHostCerts
	I1028 11:26:46.996611 1522650 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-1313708/.minikube/ca.pem, removing ...
	I1028 11:26:46.996628 1522650 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-1313708/.minikube/ca.pem
	I1028 11:26:46.996692 1522650 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-1313708/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19876-1313708/.minikube/ca.pem (1078 bytes)
	I1028 11:26:46.996790 1522650 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-1313708/.minikube/cert.pem, removing ...
	I1028 11:26:46.996801 1522650 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-1313708/.minikube/cert.pem
	I1028 11:26:46.996828 1522650 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-1313708/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19876-1313708/.minikube/cert.pem (1123 bytes)
	I1028 11:26:46.996890 1522650 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-1313708/.minikube/key.pem, removing ...
	I1028 11:26:46.996899 1522650 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-1313708/.minikube/key.pem
	I1028 11:26:46.996923 1522650 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-1313708/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19876-1313708/.minikube/key.pem (1675 bytes)
	I1028 11:26:46.997007 1522650 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19876-1313708/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19876-1313708/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19876-1313708/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-674802 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-674802]
	I1028 11:26:47.963184 1522650 provision.go:177] copyRemoteCerts
	I1028 11:26:47.963312 1522650 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 11:26:47.963394 1522650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-674802
	I1028 11:26:47.989822 1522650 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40375 SSHKeyPath:/home/jenkins/minikube-integration/19876-1313708/.minikube/machines/old-k8s-version-674802/id_rsa Username:docker}
	I1028 11:26:48.105538 1522650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-1313708/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1028 11:26:48.174450 1522650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-1313708/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1028 11:26:48.222007 1522650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-1313708/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1028 11:26:48.249562 1522650 provision.go:87] duration metric: took 1.273759551s to configureAuth
	I1028 11:26:48.249586 1522650 ubuntu.go:193] setting minikube options for container-runtime
	I1028 11:26:48.249779 1522650 config.go:182] Loaded profile config "old-k8s-version-674802": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I1028 11:26:48.249786 1522650 machine.go:96] duration metric: took 4.763050927s to provisionDockerMachine
	I1028 11:26:48.249795 1522650 start.go:293] postStartSetup for "old-k8s-version-674802" (driver="docker")
	I1028 11:26:48.249805 1522650 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 11:26:48.249852 1522650 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 11:26:48.249893 1522650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-674802
	I1028 11:26:48.284861 1522650 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40375 SSHKeyPath:/home/jenkins/minikube-integration/19876-1313708/.minikube/machines/old-k8s-version-674802/id_rsa Username:docker}
	I1028 11:26:48.410216 1522650 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 11:26:48.413760 1522650 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1028 11:26:48.413794 1522650 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1028 11:26:48.413805 1522650 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1028 11:26:48.413812 1522650 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1028 11:26:48.413823 1522650 filesync.go:126] Scanning /home/jenkins/minikube-integration/19876-1313708/.minikube/addons for local assets ...
	I1028 11:26:48.413887 1522650 filesync.go:126] Scanning /home/jenkins/minikube-integration/19876-1313708/.minikube/files for local assets ...
	I1028 11:26:48.413966 1522650 filesync.go:149] local asset: /home/jenkins/minikube-integration/19876-1313708/.minikube/files/etc/ssl/certs/13190982.pem -> 13190982.pem in /etc/ssl/certs
	I1028 11:26:48.414069 1522650 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 11:26:48.425595 1522650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-1313708/.minikube/files/etc/ssl/certs/13190982.pem --> /etc/ssl/certs/13190982.pem (1708 bytes)
	I1028 11:26:48.455134 1522650 start.go:296] duration metric: took 205.324048ms for postStartSetup
	I1028 11:26:48.455216 1522650 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1028 11:26:48.455258 1522650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-674802
	I1028 11:26:48.499944 1522650 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40375 SSHKeyPath:/home/jenkins/minikube-integration/19876-1313708/.minikube/machines/old-k8s-version-674802/id_rsa Username:docker}
	I1028 11:26:48.611336 1522650 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1028 11:26:48.616018 1522650 fix.go:56] duration metric: took 5.607204491s for fixHost
	I1028 11:26:48.616046 1522650 start.go:83] releasing machines lock for "old-k8s-version-674802", held for 5.607256528s
	I1028 11:26:48.616117 1522650 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-674802
	I1028 11:26:48.647044 1522650 ssh_runner.go:195] Run: cat /version.json
	I1028 11:26:48.647221 1522650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-674802
	I1028 11:26:48.647119 1522650 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 11:26:48.647364 1522650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-674802
	I1028 11:26:48.682005 1522650 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40375 SSHKeyPath:/home/jenkins/minikube-integration/19876-1313708/.minikube/machines/old-k8s-version-674802/id_rsa Username:docker}
	I1028 11:26:48.691790 1522650 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40375 SSHKeyPath:/home/jenkins/minikube-integration/19876-1313708/.minikube/machines/old-k8s-version-674802/id_rsa Username:docker}
	I1028 11:26:48.799830 1522650 ssh_runner.go:195] Run: systemctl --version
	I1028 11:26:48.998974 1522650 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1028 11:26:49.017206 1522650 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I1028 11:26:49.040056 1522650 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I1028 11:26:49.040132 1522650 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 11:26:49.050488 1522650 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1028 11:26:49.050513 1522650 start.go:495] detecting cgroup driver to use...
	I1028 11:26:49.050546 1522650 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1028 11:26:49.050599 1522650 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1028 11:26:49.067138 1522650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1028 11:26:49.096300 1522650 docker.go:217] disabling cri-docker service (if available) ...
	I1028 11:26:49.096370 1522650 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 11:26:49.124021 1522650 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 11:26:49.147013 1522650 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 11:26:49.325061 1522650 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 11:26:49.445234 1522650 docker.go:233] disabling docker service ...
	I1028 11:26:49.445306 1522650 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 11:26:49.462268 1522650 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 11:26:49.476297 1522650 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 11:26:49.598692 1522650 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 11:26:49.721139 1522650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1028 11:26:49.734266 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 11:26:49.756510 1522650 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I1028 11:26:49.766487 1522650 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1028 11:26:49.776538 1522650 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1028 11:26:49.776607 1522650 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1028 11:26:49.786532 1522650 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1028 11:26:49.797244 1522650 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1028 11:26:49.806851 1522650 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1028 11:26:49.816860 1522650 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 11:26:49.826074 1522650 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1028 11:26:49.835894 1522650 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 11:26:49.844849 1522650 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1028 11:26:49.853785 1522650 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:26:49.958801 1522650 ssh_runner.go:195] Run: sudo systemctl restart containerd
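
The run above points crictl at containerd's socket via /etc/crictl.yaml, then patches /etc/containerd/config.toml (pause image registry.k8s.io/pause:3.2, SystemdCgroup = false to match the detected cgroupfs driver, the runc v2 shim, and the CNI conf_dir) before restarting containerd. A sketch of how the result could be verified inside the node; these are standard docker/crictl commands, with the container name taken from this profile:

    $ docker exec old-k8s-version-674802 cat /etc/crictl.yaml
    $ docker exec old-k8s-version-674802 grep -E 'SystemdCgroup|sandbox_image|conf_dir' /etc/containerd/config.toml
    $ docker exec old-k8s-version-674802 crictl version
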
	I1028 11:26:50.175979 1522650 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I1028 11:26:50.176063 1522650 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1028 11:26:50.179880 1522650 start.go:563] Will wait 60s for crictl version
	I1028 11:26:50.179946 1522650 ssh_runner.go:195] Run: which crictl
	I1028 11:26:50.188477 1522650 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 11:26:50.234137 1522650 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.22
	RuntimeApiVersion:  v1
	I1028 11:26:50.234214 1522650 ssh_runner.go:195] Run: containerd --version
	I1028 11:26:50.267336 1522650 ssh_runner.go:195] Run: containerd --version
	I1028 11:26:50.313755 1522650 out.go:177] * Preparing Kubernetes v1.20.0 on containerd 1.7.22 ...
	I1028 11:26:50.315009 1522650 cli_runner.go:164] Run: docker network inspect old-k8s-version-674802 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1028 11:26:50.339895 1522650 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1028 11:26:50.343773 1522650 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 11:26:50.362127 1522650 kubeadm.go:883] updating cluster {Name:old-k8s-version-674802 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-674802 Namespace:default APIServerHAVIP: APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/
home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1028 11:26:50.362231 1522650 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I1028 11:26:50.362288 1522650 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 11:26:50.460278 1522650 containerd.go:627] all images are preloaded for containerd runtime.
	I1028 11:26:50.460305 1522650 containerd.go:534] Images already preloaded, skipping extraction
	I1028 11:26:50.460364 1522650 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 11:26:50.506419 1522650 containerd.go:627] all images are preloaded for containerd runtime.
	I1028 11:26:50.506449 1522650 cache_images.go:84] Images are preloaded, skipping loading
	I1028 11:26:50.506458 1522650 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.20.0 containerd true true} ...
	I1028 11:26:50.506614 1522650 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=old-k8s-version-674802 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-674802 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
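
This drop-in (written below to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf) clears the stock ExecStart and relaunches kubelet against the containerd endpoint with the node IP and hostname override. The effective unit, drop-in included, can be inspected inside the node with standard systemd commands:

    $ docker exec old-k8s-version-674802 systemctl cat kubelet
    $ docker exec old-k8s-version-674802 systemctl is-active kubelet
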
	I1028 11:26:50.506702 1522650 ssh_runner.go:195] Run: sudo crictl info
	I1028 11:26:50.557105 1522650 cni.go:84] Creating CNI manager for ""
	I1028 11:26:50.557130 1522650 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1028 11:26:50.557139 1522650 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1028 11:26:50.557160 1522650 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-674802 NodeName:old-k8s-version-674802 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt
StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1028 11:26:50.557285 1522650 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "old-k8s-version-674802"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
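The generated kubeadm config pins the v1beta2 API (current for Kubernetes v1.20), matches the cgroupfs kubelet driver chosen for containerd above, and effectively disables kubelet disk-pressure eviction for CI (imageGCHighThresholdPercent: 100 with all evictionHard thresholds at 0%). To see how far this deviates from stock settings, kubeadm can print its defaults for comparison; output shape varies by kubeadm version:

    $ kubeadm config print init-defaults --component-configs KubeletConfiguration,KubeProxyConfiguration
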
	I1028 11:26:50.557351 1522650 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1028 11:26:50.566873 1522650 binaries.go:44] Found k8s binaries, skipping transfer
	I1028 11:26:50.566984 1522650 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1028 11:26:50.575993 1522650 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (442 bytes)
	I1028 11:26:50.595920 1522650 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 11:26:50.615030 1522650 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2125 bytes)
	I1028 11:26:50.634097 1522650 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1028 11:26:50.637774 1522650 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 11:26:50.670452 1522650 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:26:50.789010 1522650 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 11:26:50.803937 1522650 certs.go:68] Setting up /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/old-k8s-version-674802 for IP: 192.168.76.2
	I1028 11:26:50.803959 1522650 certs.go:194] generating shared ca certs ...
	I1028 11:26:50.803975 1522650 certs.go:226] acquiring lock for ca certs: {Name:mk0d3ceca6221298cea760035b38b9c704e7b693 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:26:50.804101 1522650 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19876-1313708/.minikube/ca.key
	I1028 11:26:50.804145 1522650 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19876-1313708/.minikube/proxy-client-ca.key
	I1028 11:26:50.804159 1522650 certs.go:256] generating profile certs ...
	I1028 11:26:50.804241 1522650 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/old-k8s-version-674802/client.key
	I1028 11:26:50.804309 1522650 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/old-k8s-version-674802/apiserver.key.bd2ec1af
	I1028 11:26:50.804352 1522650 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/old-k8s-version-674802/proxy-client.key
	I1028 11:26:50.804465 1522650 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-1313708/.minikube/certs/1319098.pem (1338 bytes)
	W1028 11:26:50.804499 1522650 certs.go:480] ignoring /home/jenkins/minikube-integration/19876-1313708/.minikube/certs/1319098_empty.pem, impossibly tiny 0 bytes
	I1028 11:26:50.804507 1522650 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-1313708/.minikube/certs/ca-key.pem (1675 bytes)
	I1028 11:26:50.804531 1522650 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-1313708/.minikube/certs/ca.pem (1078 bytes)
	I1028 11:26:50.804553 1522650 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-1313708/.minikube/certs/cert.pem (1123 bytes)
	I1028 11:26:50.804573 1522650 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-1313708/.minikube/certs/key.pem (1675 bytes)
	I1028 11:26:50.804617 1522650 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-1313708/.minikube/files/etc/ssl/certs/13190982.pem (1708 bytes)
	I1028 11:26:50.805228 1522650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-1313708/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 11:26:50.832202 1522650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-1313708/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1028 11:26:50.902360 1522650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-1313708/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 11:26:50.941743 1522650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-1313708/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1028 11:26:50.989445 1522650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/old-k8s-version-674802/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1028 11:26:51.044784 1522650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/old-k8s-version-674802/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1028 11:26:51.079320 1522650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/old-k8s-version-674802/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 11:26:51.105341 1522650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/old-k8s-version-674802/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1028 11:26:51.151280 1522650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-1313708/.minikube/files/etc/ssl/certs/13190982.pem --> /usr/share/ca-certificates/13190982.pem (1708 bytes)
	I1028 11:26:51.184932 1522650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-1313708/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 11:26:51.214347 1522650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-1313708/.minikube/certs/1319098.pem --> /usr/share/ca-certificates/1319098.pem (1338 bytes)
	I1028 11:26:51.238859 1522650 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1028 11:26:51.257470 1522650 ssh_runner.go:195] Run: openssl version
	I1028 11:26:51.263613 1522650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13190982.pem && ln -fs /usr/share/ca-certificates/13190982.pem /etc/ssl/certs/13190982.pem"
	I1028 11:26:51.273432 1522650 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13190982.pem
	I1028 11:26:51.277389 1522650 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 10:48 /usr/share/ca-certificates/13190982.pem
	I1028 11:26:51.277510 1522650 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13190982.pem
	I1028 11:26:51.285038 1522650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/13190982.pem /etc/ssl/certs/3ec20f2e.0"
	I1028 11:26:51.294408 1522650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 11:26:51.304498 1522650 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:26:51.308461 1522650 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 10:41 /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:26:51.308583 1522650 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:26:51.316187 1522650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 11:26:51.326008 1522650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1319098.pem && ln -fs /usr/share/ca-certificates/1319098.pem /etc/ssl/certs/1319098.pem"
	I1028 11:26:51.335282 1522650 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1319098.pem
	I1028 11:26:51.339841 1522650 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 10:48 /usr/share/ca-certificates/1319098.pem
	I1028 11:26:51.339910 1522650 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1319098.pem
	I1028 11:26:51.347851 1522650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1319098.pem /etc/ssl/certs/51391683.0"
	I1028 11:26:51.357263 1522650 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 11:26:51.361661 1522650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1028 11:26:51.369429 1522650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1028 11:26:51.377289 1522650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1028 11:26:51.384634 1522650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1028 11:26:51.392607 1522650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1028 11:26:51.400147 1522650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
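
Each -checkend 86400 probe asks openssl whether the certificate expires within the next 86400 seconds (24 hours): exit status 0 means the cert stays valid past that window, 1 means it is expired or about to expire, which is presumably how minikube decides whether regeneration is needed. Standalone form:

    # Exit 0 if the cert is still valid 24h from now, 1 otherwise.
    $ openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
        && echo 'valid for at least 24h' || echo 'expiring or expired'
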
	I1028 11:26:51.408288 1522650 kubeadm.go:392] StartCluster: {Name:old-k8s-version-674802 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-674802 Namespace:default APIServerHAVIP: APIServerName:minikube
CA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/hom
e/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 11:26:51.408412 1522650 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1028 11:26:51.408510 1522650 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 11:26:51.468025 1522650 cri.go:89] found id: "2a9df06520f732f1766508da84b61f745cb047b5f7bcf5bf3ef9cb3891f6239f"
	I1028 11:26:51.468049 1522650 cri.go:89] found id: "120e0085c59b7ce7fd3c7afbb14ea7637d4c18b660f3d35631be06f9007e3a33"
	I1028 11:26:51.468054 1522650 cri.go:89] found id: "794cbb23bfba6dd5b645283f7c87ee46bd33b5c5728a364d13fbce246d0811a5"
	I1028 11:26:51.468059 1522650 cri.go:89] found id: "8d4b3dad3dd90f3ec833f354de4e8225bdaf07199d5245c988b1fdbc527c1015"
	I1028 11:26:51.468062 1522650 cri.go:89] found id: "4937ca78533bbe1e9024be3e8c38035f4fb621e9cfcd8ef6fc974857b5f788d7"
	I1028 11:26:51.468066 1522650 cri.go:89] found id: "ba54ab63823c2fcfe3e9bc95fca852e480e0d8fae4071a23e1fc38d3e74384cc"
	I1028 11:26:51.468070 1522650 cri.go:89] found id: "01a108b46e6f4f9217c1f90a9611bdbc7956ad16edbfd8093ad46cc6ef34b232"
	I1028 11:26:51.468073 1522650 cri.go:89] found id: "857580d96023ba113555b54f38493703fce44c36a25523c35c1fd07c51eee056"
	I1028 11:26:51.468078 1522650 cri.go:89] found id: ""
	I1028 11:26:51.468143 1522650 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1028 11:26:51.481164 1522650 cri.go:116] JSON = null
	W1028 11:26:51.481216 1522650 kubeadm.go:399] unpause failed: list paused: list returned 0 containers, but ps returned 8
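	(The warning above comes from comparing two views of the runtime: crictl lists 8 kube-system containers, while runc reports no paused containers at all (JSON = null), so minikube skips the unpause step. The same comparison can be reproduced by hand with the two commands from the log:
	
		sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
		sudo runc --root /run/containerd/runc/k8s.io list -f json)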
	I1028 11:26:51.481275 1522650 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1028 11:26:51.491812 1522650 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1028 11:26:51.491832 1522650 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1028 11:26:51.491882 1522650 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1028 11:26:51.500270 1522650 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1028 11:26:51.500843 1522650 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-674802" does not appear in /home/jenkins/minikube-integration/19876-1313708/kubeconfig
	I1028 11:26:51.501125 1522650 kubeconfig.go:62] /home/jenkins/minikube-integration/19876-1313708/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-674802" cluster setting kubeconfig missing "old-k8s-version-674802" context setting]
	I1028 11:26:51.501526 1522650 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-1313708/kubeconfig: {Name:mk63efc7fcbbc1d4439be659e836c582c1d1641a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:26:51.503108 1522650 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1028 11:26:51.512702 1522650 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.76.2
	I1028 11:26:51.512747 1522650 kubeadm.go:597] duration metric: took 20.899746ms to restartPrimaryControlPlane
	I1028 11:26:51.512781 1522650 kubeadm.go:394] duration metric: took 104.501349ms to StartCluster
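	(The "does not require reconfiguration" decision above rests on the diff exit code: diff -u exits 0 when the running kubeadm.yaml matches the freshly generated kubeadm.yaml.new, and 1 when they differ. A sketch of the same check, with the paths from the log:
	
		sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new \
		  && echo "no reconfiguration needed" \
		  || echo "config drift: control plane would be reconfigured")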
	I1028 11:26:51.512798 1522650 settings.go:142] acquiring lock: {Name:mk753f039bf116e385865ce8de020c5ca21e9c34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:26:51.512884 1522650 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19876-1313708/kubeconfig
	I1028 11:26:51.513567 1522650 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-1313708/kubeconfig: {Name:mk63efc7fcbbc1d4439be659e836c582c1d1641a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:26:51.513849 1522650 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1028 11:26:51.514088 1522650 config.go:182] Loaded profile config "old-k8s-version-674802": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I1028 11:26:51.514202 1522650 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1028 11:26:51.514494 1522650 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-674802"
	I1028 11:26:51.514511 1522650 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-674802"
	W1028 11:26:51.514523 1522650 addons.go:243] addon storage-provisioner should already be in state true
	I1028 11:26:51.514553 1522650 host.go:66] Checking if "old-k8s-version-674802" exists ...
	I1028 11:26:51.515061 1522650 cli_runner.go:164] Run: docker container inspect old-k8s-version-674802 --format={{.State.Status}}
	I1028 11:26:51.515345 1522650 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-674802"
	I1028 11:26:51.515387 1522650 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-674802"
	I1028 11:26:51.515768 1522650 cli_runner.go:164] Run: docker container inspect old-k8s-version-674802 --format={{.State.Status}}
	I1028 11:26:51.516270 1522650 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-674802"
	I1028 11:26:51.516295 1522650 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-674802"
	W1028 11:26:51.516316 1522650 addons.go:243] addon metrics-server should already be in state true
	I1028 11:26:51.516354 1522650 host.go:66] Checking if "old-k8s-version-674802" exists ...
	I1028 11:26:51.516874 1522650 cli_runner.go:164] Run: docker container inspect old-k8s-version-674802 --format={{.State.Status}}
	I1028 11:26:51.518952 1522650 addons.go:69] Setting dashboard=true in profile "old-k8s-version-674802"
	I1028 11:26:51.518991 1522650 addons.go:234] Setting addon dashboard=true in "old-k8s-version-674802"
	W1028 11:26:51.519109 1522650 addons.go:243] addon dashboard should already be in state true
	I1028 11:26:51.519180 1522650 host.go:66] Checking if "old-k8s-version-674802" exists ...
	I1028 11:26:51.519419 1522650 out.go:177] * Verifying Kubernetes components...
	I1028 11:26:51.521384 1522650 cli_runner.go:164] Run: docker container inspect old-k8s-version-674802 --format={{.State.Status}}
	I1028 11:26:51.522915 1522650 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:26:51.554843 1522650 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 11:26:51.557304 1522650 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 11:26:51.557330 1522650 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1028 11:26:51.557392 1522650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-674802
	I1028 11:26:51.569474 1522650 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1028 11:26:51.575747 1522650 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1028 11:26:51.575774 1522650 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1028 11:26:51.575846 1522650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-674802
	I1028 11:26:51.603148 1522650 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-674802"
	W1028 11:26:51.603173 1522650 addons.go:243] addon default-storageclass should already be in state true
	I1028 11:26:51.603202 1522650 host.go:66] Checking if "old-k8s-version-674802" exists ...
	I1028 11:26:51.603636 1522650 cli_runner.go:164] Run: docker container inspect old-k8s-version-674802 --format={{.State.Status}}
	I1028 11:26:51.615737 1522650 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1028 11:26:51.619808 1522650 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I1028 11:26:51.622191 1522650 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1028 11:26:51.622216 1522650 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1028 11:26:51.622293 1522650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-674802
	I1028 11:26:51.646206 1522650 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40375 SSHKeyPath:/home/jenkins/minikube-integration/19876-1313708/.minikube/machines/old-k8s-version-674802/id_rsa Username:docker}
	I1028 11:26:51.654133 1522650 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40375 SSHKeyPath:/home/jenkins/minikube-integration/19876-1313708/.minikube/machines/old-k8s-version-674802/id_rsa Username:docker}
	I1028 11:26:51.665394 1522650 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1028 11:26:51.665415 1522650 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1028 11:26:51.665475 1522650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-674802
	I1028 11:26:51.685383 1522650 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40375 SSHKeyPath:/home/jenkins/minikube-integration/19876-1313708/.minikube/machines/old-k8s-version-674802/id_rsa Username:docker}
	I1028 11:26:51.705591 1522650 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40375 SSHKeyPath:/home/jenkins/minikube-integration/19876-1313708/.minikube/machines/old-k8s-version-674802/id_rsa Username:docker}
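	(Each docker container inspect call above resolves the host port mapped to the container's 22/tcp, which is how the four SSH clients end up at 127.0.0.1:40375. The Go template indexes into NetworkSettings.Ports; the same lookup by hand:
	
		docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' old-k8s-version-674802)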
	I1028 11:26:51.734351 1522650 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 11:26:51.792846 1522650 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-674802" to be "Ready" ...
	I1028 11:26:51.845870 1522650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 11:26:51.911602 1522650 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1028 11:26:51.911657 1522650 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1028 11:26:51.953990 1522650 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1028 11:26:51.954015 1522650 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1028 11:26:51.961134 1522650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1028 11:26:52.021552 1522650 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1028 11:26:52.021618 1522650 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1028 11:26:52.028608 1522650 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1028 11:26:52.028695 1522650 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1028 11:26:52.123324 1522650 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1028 11:26:52.123389 1522650 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1028 11:26:52.130057 1522650 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1028 11:26:52.130127 1522650 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	W1028 11:26:52.140363 1522650 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1028 11:26:52.140498 1522650 retry.go:31] will retry after 253.713134ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
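	(The connection-refused failures that follow are expected while the apiserver container is still coming up: every kubectl apply against localhost:8443 fails until the control plane binds the port, and minikube's retry helper simply re-runs the apply after a short randomized delay. A rough shell equivalent of that loop, as a sketch rather than minikube's actual retry.go logic:
	
		until sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml; do
		  sleep 0.3   # stand-in for the randomized, growing delays in retry.go
		done)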
	I1028 11:26:52.186755 1522650 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1028 11:26:52.186826 1522650 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1028 11:26:52.212159 1522650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W1028 11:26:52.231048 1522650 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1028 11:26:52.231133 1522650 retry.go:31] will retry after 208.640397ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1028 11:26:52.235574 1522650 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1028 11:26:52.235657 1522650 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1028 11:26:52.257737 1522650 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1028 11:26:52.257812 1522650 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1028 11:26:52.320514 1522650 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1028 11:26:52.320586 1522650 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	W1028 11:26:52.382189 1522650 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1028 11:26:52.382267 1522650 retry.go:31] will retry after 350.486593ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1028 11:26:52.394589 1522650 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1028 11:26:52.394735 1522650 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1028 11:26:52.394717 1522650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 11:26:52.439424 1522650 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1028 11:26:52.439505 1522650 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1028 11:26:52.439954 1522650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1028 11:26:52.509815 1522650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1028 11:26:52.571870 1522650 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1028 11:26:52.571958 1522650 retry.go:31] will retry after 279.599257ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1028 11:26:52.634449 1522650 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1028 11:26:52.634522 1522650 retry.go:31] will retry after 455.650149ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1028 11:26:52.674393 1522650 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1028 11:26:52.674471 1522650 retry.go:31] will retry after 215.350311ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1028 11:26:52.733673 1522650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W1028 11:26:52.837754 1522650 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1028 11:26:52.837792 1522650 retry.go:31] will retry after 248.280633ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1028 11:26:52.851959 1522650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 11:26:52.890257 1522650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1028 11:26:52.967770 1522650 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1028 11:26:52.967806 1522650 retry.go:31] will retry after 410.199026ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1028 11:26:53.049496 1522650 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1028 11:26:53.049537 1522650 retry.go:31] will retry after 279.498608ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1028 11:26:53.086743 1522650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1028 11:26:53.091133 1522650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1028 11:26:53.259664 1522650 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1028 11:26:53.259699 1522650 retry.go:31] will retry after 526.438393ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1028 11:26:53.259744 1522650 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1028 11:26:53.259758 1522650 retry.go:31] will retry after 500.371946ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1028 11:26:53.329567 1522650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1028 11:26:53.378898 1522650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1028 11:26:53.441097 1522650 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1028 11:26:53.441134 1522650 retry.go:31] will retry after 556.126662ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1028 11:26:53.519558 1522650 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1028 11:26:53.519601 1522650 retry.go:31] will retry after 1.095915489s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1028 11:26:53.760828 1522650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1028 11:26:53.787216 1522650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1028 11:26:53.793794 1522650 node_ready.go:53] error getting node "old-k8s-version-674802": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-674802": dial tcp 192.168.76.2:8443: connect: connection refused
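	(The node_ready errors are the same story from the client side: the readiness poll is a plain GET on the node object at https://192.168.76.2:8443, refused until the apiserver is listening. Once the server is up, the node's Ready condition can be read directly, using the kubeconfig path from the log:
	
		kubectl --kubeconfig /home/jenkins/minikube-integration/19876-1313708/kubeconfig \
		  get node old-k8s-version-674802 \
		  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}')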
	W1028 11:26:53.886002 1522650 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1028 11:26:53.886081 1522650 retry.go:31] will retry after 758.874318ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1028 11:26:53.962734 1522650 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1028 11:26:53.962814 1522650 retry.go:31] will retry after 550.647538ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1028 11:26:53.998122 1522650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1028 11:26:54.100989 1522650 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1028 11:26:54.101074 1522650 retry.go:31] will retry after 1.198557101s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1028 11:26:54.514278 1522650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1028 11:26:54.615754 1522650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1028 11:26:54.622188 1522650 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1028 11:26:54.622289 1522650 retry.go:31] will retry after 1.015831792s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1028 11:26:54.645506 1522650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1028 11:26:54.793964 1522650 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1028 11:26:54.794074 1522650 retry.go:31] will retry after 1.759219185s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1028 11:26:54.835502 1522650 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1028 11:26:54.835589 1522650 retry.go:31] will retry after 1.351958061s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1028 11:26:55.300724 1522650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1028 11:26:55.420315 1522650 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1028 11:26:55.420352 1522650 retry.go:31] will retry after 1.848775647s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1028 11:26:55.638820 1522650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W1028 11:26:55.733149 1522650 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1028 11:26:55.733187 1522650 retry.go:31] will retry after 2.083947839s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1028 11:26:56.188389 1522650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1028 11:26:56.282971 1522650 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1028 11:26:56.283000 1522650 retry.go:31] will retry after 2.775690342s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1028 11:26:56.293526 1522650 node_ready.go:53] error getting node "old-k8s-version-674802": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-674802": dial tcp 192.168.76.2:8443: connect: connection refused
	I1028 11:26:56.554030 1522650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1028 11:26:56.660380 1522650 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1028 11:26:56.660410 1522650 retry.go:31] will retry after 1.636434797s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1028 11:26:57.269399 1522650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1028 11:26:57.376808 1522650 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1028 11:26:57.376854 1522650 retry.go:31] will retry after 1.712727594s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1028 11:26:57.818353 1522650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W1028 11:26:57.910864 1522650 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1028 11:26:57.910894 1522650 retry.go:31] will retry after 1.53323177s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1028 11:26:58.297721 1522650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1028 11:26:58.407643 1522650 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1028 11:26:58.407672 1522650 retry.go:31] will retry after 1.974364231s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1028 11:26:58.793475 1522650 node_ready.go:53] error getting node "old-k8s-version-674802": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-674802": dial tcp 192.168.76.2:8443: connect: connection refused
	I1028 11:26:59.058947 1522650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1028 11:26:59.090344 1522650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1028 11:26:59.181013 1522650 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1028 11:26:59.181049 1522650 retry.go:31] will retry after 3.909179468s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1028 11:26:59.246632 1522650 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1028 11:26:59.246671 1522650 retry.go:31] will retry after 2.560689734s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1028 11:26:59.444336 1522650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W1028 11:26:59.537550 1522650 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1028 11:26:59.537584 1522650 retry.go:31] will retry after 4.434253189s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1028 11:27:00.383125 1522650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 11:27:00.793734 1522650 node_ready.go:53] error getting node "old-k8s-version-674802": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-674802": dial tcp 192.168.76.2:8443: connect: connection refused
	I1028 11:27:01.808284 1522650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1028 11:27:03.091337 1522650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1028 11:27:03.972411 1522650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1028 11:27:10.874306 1522650 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (10.491139351s)
	W1028 11:27:10.874345 1522650 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	I1028 11:27:10.874363 1522650 retry.go:31] will retry after 3.978863022s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	I1028 11:27:11.293948 1522650 node_ready.go:53] error getting node "old-k8s-version-674802": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-674802": net/http: TLS handshake timeout
	I1028 11:27:11.724020 1522650 node_ready.go:49] node "old-k8s-version-674802" has status "Ready":"True"
	I1028 11:27:11.724049 1522650 node_ready.go:38] duration metric: took 19.931110641s for node "old-k8s-version-674802" to be "Ready" ...
	I1028 11:27:11.724058 1522650 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods, including pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler], to be "Ready" ...
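From here the test polls each control-plane pod's Ready condition until it flips to "True" or the per-pod budget expires. The same check can be reproduced with kubectl's JSONPath output; a sketch under those assumptions (kubectl installed, cluster reachable; the helper name is illustrative):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// waitPodReady polls the pod's Ready condition, the check pod_ready.go
	// performs in the log above.
	func waitPodReady(ns, pod string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		jp := `jsonpath={.status.conditions[?(@.type=="Ready")].status}`
		for time.Now().Before(deadline) {
			out, err := exec.Command("kubectl", "-n", ns, "get", "pod", pod, "-o", jp).Output()
			if err == nil && strings.TrimSpace(string(out)) == "True" {
				return nil
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("pod %s/%s not Ready within %s", ns, pod, timeout)
	}

	func main() {
		fmt.Println(waitPodReady("kube-system", "coredns-74ff55c5b-wlp24", 6*time.Minute))
	}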
	I1028 11:27:11.961765 1522650 pod_ready.go:79] waiting up to 6m0s for pod "coredns-74ff55c5b-wlp24" in "kube-system" namespace to be "Ready" ...
	I1028 11:27:12.070538 1522650 pod_ready.go:93] pod "coredns-74ff55c5b-wlp24" in "kube-system" namespace has status "Ready":"True"
	I1028 11:27:12.070568 1522650 pod_ready.go:82] duration metric: took 106.197986ms for pod "coredns-74ff55c5b-wlp24" in "kube-system" namespace to be "Ready" ...
	I1028 11:27:12.070582 1522650 pod_ready.go:79] waiting up to 6m0s for pod "etcd-old-k8s-version-674802" in "kube-system" namespace to be "Ready" ...
	I1028 11:27:12.102882 1522650 pod_ready.go:93] pod "etcd-old-k8s-version-674802" in "kube-system" namespace has status "Ready":"True"
	I1028 11:27:12.102955 1522650 pod_ready.go:82] duration metric: took 32.364855ms for pod "etcd-old-k8s-version-674802" in "kube-system" namespace to be "Ready" ...
	I1028 11:27:12.102985 1522650 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-674802" in "kube-system" namespace to be "Ready" ...
	I1028 11:27:13.086088 1522650 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (9.994713981s)
	I1028 11:27:13.086452 1522650 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (9.114011735s)
	I1028 11:27:13.086518 1522650 addons.go:475] Verifying addon metrics-server=true in "old-k8s-version-674802"
	I1028 11:27:13.086602 1522650 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (11.278281187s)
	I1028 11:27:13.089571 1522650 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-674802 addons enable metrics-server
	
	I1028 11:27:14.109158 1522650 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-674802" in "kube-system" namespace has status "Ready":"False"
	I1028 11:27:14.854261 1522650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 11:27:15.629946 1522650 out.go:177] * Enabled addons: metrics-server, dashboard, default-storageclass, storage-provisioner
	I1028 11:27:15.632956 1522650 addons.go:510] duration metric: took 24.118743014s to enable addons: enabled=[metrics-server dashboard default-storageclass storage-provisioner]
	I1028 11:27:16.115452 1522650 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-674802" in "kube-system" namespace has status "Ready":"False"
	I1028 11:27:18.610031 1522650 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-674802" in "kube-system" namespace has status "Ready":"False"
	I1028 11:27:20.609886 1522650 pod_ready.go:93] pod "kube-apiserver-old-k8s-version-674802" in "kube-system" namespace has status "Ready":"True"
	I1028 11:27:20.609911 1522650 pod_ready.go:82] duration metric: took 8.506904709s for pod "kube-apiserver-old-k8s-version-674802" in "kube-system" namespace to be "Ready" ...
	I1028 11:27:20.609924 1522650 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-674802" in "kube-system" namespace to be "Ready" ...
	I1028 11:27:22.616429 1522650 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-674802" in "kube-system" namespace has status "Ready":"False"
	I1028 11:27:25.116549 1522650 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-674802" in "kube-system" namespace has status "Ready":"False"
	I1028 11:27:27.616074 1522650 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-674802" in "kube-system" namespace has status "Ready":"False"
	I1028 11:27:29.616632 1522650 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-674802" in "kube-system" namespace has status "Ready":"False"
	I1028 11:27:32.117065 1522650 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-674802" in "kube-system" namespace has status "Ready":"False"
	I1028 11:27:34.616765 1522650 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-674802" in "kube-system" namespace has status "Ready":"False"
	I1028 11:27:36.616869 1522650 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-674802" in "kube-system" namespace has status "Ready":"False"
	I1028 11:27:38.619813 1522650 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-674802" in "kube-system" namespace has status "Ready":"False"
	I1028 11:27:41.116175 1522650 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-674802" in "kube-system" namespace has status "Ready":"False"
	I1028 11:27:43.116575 1522650 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-674802" in "kube-system" namespace has status "Ready":"False"
	I1028 11:27:45.118253 1522650 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-674802" in "kube-system" namespace has status "Ready":"False"
	I1028 11:27:47.134364 1522650 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-674802" in "kube-system" namespace has status "Ready":"False"
	I1028 11:27:49.624345 1522650 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-674802" in "kube-system" namespace has status "Ready":"False"
	I1028 11:27:52.116376 1522650 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-674802" in "kube-system" namespace has status "Ready":"False"
	I1028 11:27:54.116854 1522650 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-674802" in "kube-system" namespace has status "Ready":"False"
	I1028 11:27:56.117962 1522650 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-674802" in "kube-system" namespace has status "Ready":"False"
	I1028 11:27:58.616380 1522650 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-674802" in "kube-system" namespace has status "Ready":"False"
	I1028 11:28:00.616542 1522650 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-674802" in "kube-system" namespace has status "Ready":"False"
	I1028 11:28:02.617251 1522650 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-674802" in "kube-system" namespace has status "Ready":"False"
	I1028 11:28:05.115519 1522650 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-674802" in "kube-system" namespace has status "Ready":"False"
	I1028 11:28:07.116478 1522650 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-674802" in "kube-system" namespace has status "Ready":"False"
	I1028 11:28:09.617133 1522650 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-674802" in "kube-system" namespace has status "Ready":"False"
	I1028 11:28:12.173571 1522650 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-674802" in "kube-system" namespace has status "Ready":"False"
	I1028 11:28:14.615949 1522650 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-674802" in "kube-system" namespace has status "Ready":"False"
	I1028 11:28:16.617708 1522650 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-674802" in "kube-system" namespace has status "Ready":"False"
	I1028 11:28:19.115921 1522650 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-674802" in "kube-system" namespace has status "Ready":"False"
	I1028 11:28:21.116930 1522650 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-674802" in "kube-system" namespace has status "Ready":"False"
	I1028 11:28:23.615332 1522650 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-674802" in "kube-system" namespace has status "Ready":"False"
	I1028 11:28:25.618942 1522650 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-674802" in "kube-system" namespace has status "Ready":"False"
	I1028 11:28:28.116393 1522650 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-674802" in "kube-system" namespace has status "Ready":"False"
	I1028 11:28:30.117263 1522650 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-674802" in "kube-system" namespace has status "Ready":"False"
	I1028 11:28:32.616331 1522650 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-674802" in "kube-system" namespace has status "Ready":"False"
	I1028 11:28:33.615914 1522650 pod_ready.go:93] pod "kube-controller-manager-old-k8s-version-674802" in "kube-system" namespace has status "Ready":"True"
	I1028 11:28:33.615939 1522650 pod_ready.go:82] duration metric: took 1m13.006007915s for pod "kube-controller-manager-old-k8s-version-674802" in "kube-system" namespace to be "Ready" ...
	I1028 11:28:33.615951 1522650 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-sdcls" in "kube-system" namespace to be "Ready" ...
	I1028 11:28:33.621161 1522650 pod_ready.go:93] pod "kube-proxy-sdcls" in "kube-system" namespace has status "Ready":"True"
	I1028 11:28:33.621190 1522650 pod_ready.go:82] duration metric: took 5.230393ms for pod "kube-proxy-sdcls" in "kube-system" namespace to be "Ready" ...
	I1028 11:28:33.621203 1522650 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-674802" in "kube-system" namespace to be "Ready" ...
	I1028 11:28:33.626137 1522650 pod_ready.go:93] pod "kube-scheduler-old-k8s-version-674802" in "kube-system" namespace has status "Ready":"True"
	I1028 11:28:33.626166 1522650 pod_ready.go:82] duration metric: took 4.955793ms for pod "kube-scheduler-old-k8s-version-674802" in "kube-system" namespace to be "Ready" ...
	I1028 11:28:33.626177 1522650 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace to be "Ready" ...
	I1028 11:28:35.632722 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
	I1028 11:28:37.632853 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
	I1028 11:28:40.132535 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
	I1028 11:28:42.133063 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
	I1028 11:28:44.632409 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
	I1028 11:28:47.133283 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
	I1028 11:28:49.632557 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
	I1028 11:28:51.633555 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
	I1028 11:28:54.131835 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
	I1028 11:28:56.132042 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
	I1028 11:28:58.134501 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
	I1028 11:29:00.134811 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
	I1028 11:29:02.631708 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
	I1028 11:29:05.133327 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
	I1028 11:29:07.632559 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
	I1028 11:29:10.132196 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
	I1028 11:29:12.133134 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
	I1028 11:29:14.136072 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
	I1028 11:29:16.138475 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
	I1028 11:29:18.633285 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
	I1028 11:29:21.132111 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
	I1028 11:29:23.132241 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
	I1028 11:29:25.632359 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
	I1028 11:29:27.633222 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
	I1028 11:29:29.633651 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
	I1028 11:29:32.134732 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
	I1028 11:29:34.632661 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
	I1028 11:29:36.633305 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
	I1028 11:29:39.133679 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
	I1028 11:29:41.631857 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
	I1028 11:29:43.633865 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
	I1028 11:29:46.132741 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
	I1028 11:29:48.632693 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
	I1028 11:29:50.633998 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
	I1028 11:29:53.132201 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
	I1028 11:29:55.132868 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
	I1028 11:29:57.133028 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
	I1028 11:29:59.632114 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
	I1028 11:30:01.632659 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
	I1028 11:30:04.132047 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
	I1028 11:30:06.132395 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
	I1028 11:30:08.132439 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
	I1028 11:30:10.632541 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
	I1028 11:30:12.633400 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
	I1028 11:30:15.132962 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
	I1028 11:30:17.632661 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
	I1028 11:30:20.132524 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
	I1028 11:30:22.133109 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
	I1028 11:30:24.632048 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
	I1028 11:30:26.632153 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
	I1028 11:30:28.632624 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
	I1028 11:30:30.632947 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
	I1028 11:30:33.131694 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
	I1028 11:30:35.133071 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
	I1028 11:30:37.632517 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
	I1028 11:30:39.632741 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
	I1028 11:30:42.133251 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
	I1028 11:30:44.631573 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
	I1028 11:30:46.632230 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
	I1028 11:30:48.632354 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
	I1028 11:30:50.632946 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
	I1028 11:30:53.132077 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
	I1028 11:30:55.132720 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
	I1028 11:30:57.632352 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
	I1028 11:31:00.133619 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
	I1028 11:31:02.632622 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
	I1028 11:31:05.132476 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
	I1028 11:31:07.132537 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
	I1028 11:31:09.132882 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
	I1028 11:31:11.632721 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
	I1028 11:31:13.694163 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
	I1028 11:31:16.133028 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
	I1028 11:31:18.633339 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
	I1028 11:31:21.137509 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
	I1028 11:31:23.632552 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
	I1028 11:31:25.632827 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
	I1028 11:31:28.132669 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
	I1028 11:31:30.132988 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
	I1028 11:31:32.632463 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
	I1028 11:31:35.132552 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
	I1028 11:31:37.133003 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
	I1028 11:31:39.632266 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
	I1028 11:31:41.632492 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
	I1028 11:31:44.132365 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
	I1028 11:31:46.133181 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
	I1028 11:31:48.633037 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
	I1028 11:31:51.132608 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
	I1028 11:31:53.632395 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
	I1028 11:31:56.132659 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
	I1028 11:31:58.632631 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
	I1028 11:32:00.632696 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
	I1028 11:32:03.133533 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
	I1028 11:32:05.633050 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
	I1028 11:32:08.131970 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
	I1028 11:32:10.132316 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
	I1028 11:32:12.132996 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
	I1028 11:32:14.631556 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
	I1028 11:32:16.635524 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
	I1028 11:32:19.131917 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
	I1028 11:32:21.132458 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
	I1028 11:32:23.641916 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
	I1028 11:32:26.132630 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
	I1028 11:32:28.633137 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
	I1028 11:32:31.132354 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
	I1028 11:32:33.132407 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
	I1028 11:32:33.631824 1522650 pod_ready.go:82] duration metric: took 4m0.005630495s for pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace to be "Ready" ...
	E1028 11:32:33.631849 1522650 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1028 11:32:33.631859 1522650 pod_ready.go:39] duration metric: took 5m21.907789977s of extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
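The metrics-server pod never turns Ready, so after 4m0.005630495s the wait's context expires and the E-level "context deadline exceeded" is logged; only then does the test move on to log gathering. The cutoff behaves like any context-bounded poll loop; a sketch (illustrative, not minikube's waitPodCondition):

	package main

	import (
		"context"
		"errors"
		"fmt"
		"time"
	)

	// pollUntil retries check every interval until it succeeds or ctx expires.
	func pollUntil(ctx context.Context, interval time.Duration, check func() bool) error {
		t := time.NewTicker(interval)
		defer t.Stop()
		for {
			if check() {
				return nil
			}
			select {
			case <-ctx.Done():
				return ctx.Err() // context.DeadlineExceeded once the budget is spent
			case <-t.C:
			}
		}
	}

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
		defer cancel()
		err := pollUntil(ctx, 2*time.Second, func() bool { return false }) // pod never turns Ready
		fmt.Println(errors.Is(err, context.DeadlineExceeded))
	}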
	I1028 11:32:33.631875 1522650 api_server.go:52] waiting for apiserver process to appear ...
	I1028 11:32:33.631912 1522650 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1028 11:32:33.631979 1522650 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 11:32:33.671097 1522650 cri.go:89] found id: "c02d779e69c4a6181f499ea147b62985bdd68ffb9d61fe7dab43115ca4318de6"
	I1028 11:32:33.671160 1522650 cri.go:89] found id: "ba54ab63823c2fcfe3e9bc95fca852e480e0d8fae4071a23e1fc38d3e74384cc"
	I1028 11:32:33.671179 1522650 cri.go:89] found id: ""
	I1028 11:32:33.671201 1522650 logs.go:282] 2 containers: [c02d779e69c4a6181f499ea147b62985bdd68ffb9d61fe7dab43115ca4318de6 ba54ab63823c2fcfe3e9bc95fca852e480e0d8fae4071a23e1fc38d3e74384cc]
	I1028 11:32:33.671290 1522650 ssh_runner.go:195] Run: which crictl
	I1028 11:32:33.674939 1522650 ssh_runner.go:195] Run: which crictl
	I1028 11:32:33.678299 1522650 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1028 11:32:33.678365 1522650 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 11:32:33.718743 1522650 cri.go:89] found id: "6208543cc8b3c7edcccd800e0f9d98e845390bf870426de3226d81781dce3148"
	I1028 11:32:33.718767 1522650 cri.go:89] found id: "01a108b46e6f4f9217c1f90a9611bdbc7956ad16edbfd8093ad46cc6ef34b232"
	I1028 11:32:33.718772 1522650 cri.go:89] found id: ""
	I1028 11:32:33.718780 1522650 logs.go:282] 2 containers: [6208543cc8b3c7edcccd800e0f9d98e845390bf870426de3226d81781dce3148 01a108b46e6f4f9217c1f90a9611bdbc7956ad16edbfd8093ad46cc6ef34b232]
	I1028 11:32:33.718835 1522650 ssh_runner.go:195] Run: which crictl
	I1028 11:32:33.722744 1522650 ssh_runner.go:195] Run: which crictl
	I1028 11:32:33.726556 1522650 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1028 11:32:33.726631 1522650 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 11:32:33.765910 1522650 cri.go:89] found id: "b864ea5367f07235e01b7c4c4545bda20ba5924d99b8e542c0315227a77c2c59"
	I1028 11:32:33.765934 1522650 cri.go:89] found id: "2a9df06520f732f1766508da84b61f745cb047b5f7bcf5bf3ef9cb3891f6239f"
	I1028 11:32:33.765940 1522650 cri.go:89] found id: ""
	I1028 11:32:33.765947 1522650 logs.go:282] 2 containers: [b864ea5367f07235e01b7c4c4545bda20ba5924d99b8e542c0315227a77c2c59 2a9df06520f732f1766508da84b61f745cb047b5f7bcf5bf3ef9cb3891f6239f]
	I1028 11:32:33.766003 1522650 ssh_runner.go:195] Run: which crictl
	I1028 11:32:33.769566 1522650 ssh_runner.go:195] Run: which crictl
	I1028 11:32:33.772996 1522650 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1028 11:32:33.773098 1522650 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 11:32:33.812180 1522650 cri.go:89] found id: "31281b2de0e80c98175b18b80c8ece18d25bb88841661719ca7805a5cb795824"
	I1028 11:32:33.812243 1522650 cri.go:89] found id: "857580d96023ba113555b54f38493703fce44c36a25523c35c1fd07c51eee056"
	I1028 11:32:33.812255 1522650 cri.go:89] found id: ""
	I1028 11:32:33.812263 1522650 logs.go:282] 2 containers: [31281b2de0e80c98175b18b80c8ece18d25bb88841661719ca7805a5cb795824 857580d96023ba113555b54f38493703fce44c36a25523c35c1fd07c51eee056]
	I1028 11:32:33.812322 1522650 ssh_runner.go:195] Run: which crictl
	I1028 11:32:33.816121 1522650 ssh_runner.go:195] Run: which crictl
	I1028 11:32:33.819678 1522650 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1028 11:32:33.819779 1522650 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 11:32:33.857214 1522650 cri.go:89] found id: "c0ed41137fbff35ffcb34df99174bf1cb9e6e2fda2154d421ab797a438e507bf"
	I1028 11:32:33.857283 1522650 cri.go:89] found id: "8d4b3dad3dd90f3ec833f354de4e8225bdaf07199d5245c988b1fdbc527c1015"
	I1028 11:32:33.857297 1522650 cri.go:89] found id: ""
	I1028 11:32:33.857305 1522650 logs.go:282] 2 containers: [c0ed41137fbff35ffcb34df99174bf1cb9e6e2fda2154d421ab797a438e507bf 8d4b3dad3dd90f3ec833f354de4e8225bdaf07199d5245c988b1fdbc527c1015]
	I1028 11:32:33.857368 1522650 ssh_runner.go:195] Run: which crictl
	I1028 11:32:33.860914 1522650 ssh_runner.go:195] Run: which crictl
	I1028 11:32:33.864200 1522650 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 11:32:33.864267 1522650 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 11:32:33.915951 1522650 cri.go:89] found id: "056d20453e357e86aa3e62b0dd7d945c40ec05cc5f462727941eac3714718438"
	I1028 11:32:33.916024 1522650 cri.go:89] found id: "4937ca78533bbe1e9024be3e8c38035f4fb621e9cfcd8ef6fc974857b5f788d7"
	I1028 11:32:33.916042 1522650 cri.go:89] found id: ""
	I1028 11:32:33.916061 1522650 logs.go:282] 2 containers: [056d20453e357e86aa3e62b0dd7d945c40ec05cc5f462727941eac3714718438 4937ca78533bbe1e9024be3e8c38035f4fb621e9cfcd8ef6fc974857b5f788d7]
	I1028 11:32:33.916149 1522650 ssh_runner.go:195] Run: which crictl
	I1028 11:32:33.919578 1522650 ssh_runner.go:195] Run: which crictl
	I1028 11:32:33.922823 1522650 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1028 11:32:33.922911 1522650 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 11:32:33.966708 1522650 cri.go:89] found id: "42478c583a7df5e62ae3718bc78fb6be211abb490f9190466870859ec29e3bf3"
	I1028 11:32:33.966732 1522650 cri.go:89] found id: "120e0085c59b7ce7fd3c7afbb14ea7637d4c18b660f3d35631be06f9007e3a33"
	I1028 11:32:33.966737 1522650 cri.go:89] found id: ""
	I1028 11:32:33.966745 1522650 logs.go:282] 2 containers: [42478c583a7df5e62ae3718bc78fb6be211abb490f9190466870859ec29e3bf3 120e0085c59b7ce7fd3c7afbb14ea7637d4c18b660f3d35631be06f9007e3a33]
	I1028 11:32:33.966834 1522650 ssh_runner.go:195] Run: which crictl
	I1028 11:32:33.970820 1522650 ssh_runner.go:195] Run: which crictl
	I1028 11:32:33.974248 1522650 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 11:32:33.974365 1522650 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 11:32:34.016524 1522650 cri.go:89] found id: "9666309986efcb4076982c7df1d9e0c9f905cbcee4a0e3d7a1dcd2ab0132348b"
	I1028 11:32:34.016553 1522650 cri.go:89] found id: ""
	I1028 11:32:34.016562 1522650 logs.go:282] 1 containers: [9666309986efcb4076982c7df1d9e0c9f905cbcee4a0e3d7a1dcd2ab0132348b]
	I1028 11:32:34.016619 1522650 ssh_runner.go:195] Run: which crictl
	I1028 11:32:34.020458 1522650 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1028 11:32:34.020542 1522650 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1028 11:32:34.066319 1522650 cri.go:89] found id: "af354fdce961d0c931e3e7b6826943560aa505683dbea93dedbce9a94105e0f8"
	I1028 11:32:34.066393 1522650 cri.go:89] found id: "e4aa22206b37d13d9665658eeeb808da6cd7c1789a0887c07a5f6460c9dd38f5"
	I1028 11:32:34.066411 1522650 cri.go:89] found id: ""
	I1028 11:32:34.066425 1522650 logs.go:282] 2 containers: [af354fdce961d0c931e3e7b6826943560aa505683dbea93dedbce9a94105e0f8 e4aa22206b37d13d9665658eeeb808da6cd7c1789a0887c07a5f6460c9dd38f5]
	I1028 11:32:34.066496 1522650 ssh_runner.go:195] Run: which crictl
	I1028 11:32:34.069913 1522650 ssh_runner.go:195] Run: which crictl
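Each "found id" pair above comes from querying the CRI runtime directly rather than the apiserver: `crictl ps -a --quiet --name=<name>` prints one 64-hex-character container ID per line, covering both the current and the pre-restart container. A sketch of that discovery step (assumes crictl and sudo access on the node):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listContainers mirrors the `sudo crictl ps -a --quiet --name=...`
	// calls above: it returns the IDs of all containers, running or
	// exited, whose name matches.
	func listContainers(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		ids, err := listContainers("kube-apiserver")
		if err != nil {
			fmt.Println(err)
			return
		}
		for _, id := range ids {
			fmt.Println(id)
		}
	}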
	I1028 11:32:34.073421 1522650 logs.go:123] Gathering logs for describe nodes ...
	I1028 11:32:34.073480 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 11:32:34.210043 1522650 logs.go:123] Gathering logs for etcd [01a108b46e6f4f9217c1f90a9611bdbc7956ad16edbfd8093ad46cc6ef34b232] ...
	I1028 11:32:34.210070 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 01a108b46e6f4f9217c1f90a9611bdbc7956ad16edbfd8093ad46cc6ef34b232"
	I1028 11:32:34.253651 1522650 logs.go:123] Gathering logs for coredns [b864ea5367f07235e01b7c4c4545bda20ba5924d99b8e542c0315227a77c2c59] ...
	I1028 11:32:34.253678 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b864ea5367f07235e01b7c4c4545bda20ba5924d99b8e542c0315227a77c2c59"
	I1028 11:32:34.291333 1522650 logs.go:123] Gathering logs for kube-scheduler [31281b2de0e80c98175b18b80c8ece18d25bb88841661719ca7805a5cb795824] ...
	I1028 11:32:34.291362 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31281b2de0e80c98175b18b80c8ece18d25bb88841661719ca7805a5cb795824"
	I1028 11:32:34.331399 1522650 logs.go:123] Gathering logs for kube-controller-manager [056d20453e357e86aa3e62b0dd7d945c40ec05cc5f462727941eac3714718438] ...
	I1028 11:32:34.331554 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 056d20453e357e86aa3e62b0dd7d945c40ec05cc5f462727941eac3714718438"
	I1028 11:32:34.391065 1522650 logs.go:123] Gathering logs for kindnet [120e0085c59b7ce7fd3c7afbb14ea7637d4c18b660f3d35631be06f9007e3a33] ...
	I1028 11:32:34.391103 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 120e0085c59b7ce7fd3c7afbb14ea7637d4c18b660f3d35631be06f9007e3a33"
	I1028 11:32:34.448609 1522650 logs.go:123] Gathering logs for kubernetes-dashboard [9666309986efcb4076982c7df1d9e0c9f905cbcee4a0e3d7a1dcd2ab0132348b] ...
	I1028 11:32:34.448637 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9666309986efcb4076982c7df1d9e0c9f905cbcee4a0e3d7a1dcd2ab0132348b"
	I1028 11:32:34.520640 1522650 logs.go:123] Gathering logs for kubelet ...
	I1028 11:32:34.520667 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
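The "Found kubelet problem" warnings that follow come from scanning that journal output for error-level kubelet lines. A minimal version of the scan (the pattern matches the journal format shown below; the shape of the code is illustrative, not minikube's logs.go):

	package main

	import (
		"bufio"
		"bytes"
		"fmt"
		"os/exec"
		"regexp"
	)

	func main() {
		out, err := exec.Command("sudo", "journalctl", "-u", "kubelet", "-n", "400").Output()
		if err != nil {
			fmt.Println(err)
			return
		}
		// Error-level klog lines look like "... kubelet[662]: E1028 11:27:11 ...".
		problem := regexp.MustCompile(`kubelet\[\d+\]: E\d{4} `)
		sc := bufio.NewScanner(bytes.NewReader(out))
		for sc.Scan() {
			if problem.MatchString(sc.Text()) {
				fmt.Println("Found kubelet problem:", sc.Text())
			}
		}
	}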
	W1028 11:32:34.575602 1522650 logs.go:138] Found kubelet problem: Oct 28 11:27:11 old-k8s-version-674802 kubelet[662]: E1028 11:27:11.736433     662 reflector.go:138] object-"default"/"default-token-rkh5t": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-rkh5t" is forbidden: User "system:node:old-k8s-version-674802" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-674802' and this object
	W1028 11:32:34.575998 1522650 logs.go:138] Found kubelet problem: Oct 28 11:27:11 old-k8s-version-674802 kubelet[662]: E1028 11:27:11.767488     662 reflector.go:138] object-"kube-system"/"metrics-server-token-bnmqq": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-bnmqq" is forbidden: User "system:node:old-k8s-version-674802" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-674802' and this object
	W1028 11:32:34.576215 1522650 logs.go:138] Found kubelet problem: Oct 28 11:27:11 old-k8s-version-674802 kubelet[662]: E1028 11:27:11.769521     662 reflector.go:138] object-"kube-system"/"kindnet-token-rljtj": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-rljtj" is forbidden: User "system:node:old-k8s-version-674802" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-674802' and this object
	W1028 11:32:34.576429 1522650 logs.go:138] Found kubelet problem: Oct 28 11:27:11 old-k8s-version-674802 kubelet[662]: E1028 11:27:11.783592     662 reflector.go:138] object-"kube-system"/"kube-proxy-token-v6b5p": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-v6b5p" is forbidden: User "system:node:old-k8s-version-674802" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-674802' and this object
	W1028 11:32:34.576637 1522650 logs.go:138] Found kubelet problem: Oct 28 11:27:11 old-k8s-version-674802 kubelet[662]: E1028 11:27:11.786532     662 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-674802" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-674802' and this object
	W1028 11:32:34.576861 1522650 logs.go:138] Found kubelet problem: Oct 28 11:27:11 old-k8s-version-674802 kubelet[662]: E1028 11:27:11.786656     662 reflector.go:138] object-"kube-system"/"storage-provisioner-token-7s7sg": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-7s7sg" is forbidden: User "system:node:old-k8s-version-674802" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-674802' and this object
	W1028 11:32:34.577070 1522650 logs.go:138] Found kubelet problem: Oct 28 11:27:11 old-k8s-version-674802 kubelet[662]: E1028 11:27:11.786717     662 reflector.go:138] object-"kube-system"/"coredns-token-t6lq7": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-t6lq7" is forbidden: User "system:node:old-k8s-version-674802" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-674802' and this object
	W1028 11:32:34.577270 1522650 logs.go:138] Found kubelet problem: Oct 28 11:27:11 old-k8s-version-674802 kubelet[662]: E1028 11:27:11.786765     662 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-674802" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-674802' and this object
	W1028 11:32:34.588025 1522650 logs.go:138] Found kubelet problem: Oct 28 11:27:13 old-k8s-version-674802 kubelet[662]: E1028 11:27:13.704595     662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1028 11:32:34.590509 1522650 logs.go:138] Found kubelet problem: Oct 28 11:27:14 old-k8s-version-674802 kubelet[662]: E1028 11:27:14.680234     662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 11:32:34.593337 1522650 logs.go:138] Found kubelet problem: Oct 28 11:27:27 old-k8s-version-674802 kubelet[662]: E1028 11:27:27.405191     662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1028 11:32:34.595471 1522650 logs.go:138] Found kubelet problem: Oct 28 11:27:36 old-k8s-version-674802 kubelet[662]: E1028 11:27:36.774117     662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
	W1028 11:32:34.595816 1522650 logs.go:138] Found kubelet problem: Oct 28 11:27:37 old-k8s-version-674802 kubelet[662]: E1028 11:27:37.779544     662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
	W1028 11:32:34.596145 1522650 logs.go:138] Found kubelet problem: Oct 28 11:27:38 old-k8s-version-674802 kubelet[662]: E1028 11:27:38.781616     662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
	W1028 11:32:34.596330 1522650 logs.go:138] Found kubelet problem: Oct 28 11:27:39 old-k8s-version-674802 kubelet[662]: E1028 11:27:39.404833     662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 11:32:34.597109 1522650 logs.go:138] Found kubelet problem: Oct 28 11:27:45 old-k8s-version-674802 kubelet[662]: E1028 11:27:45.806023     662 pod_workers.go:191] Error syncing pod eb6e0fb4-e030-4eb7-8b96-477de7691df6 ("storage-provisioner_kube-system(eb6e0fb4-e030-4eb7-8b96-477de7691df6)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(eb6e0fb4-e030-4eb7-8b96-477de7691df6)"
	W1028 11:32:34.599930 1522650 logs.go:138] Found kubelet problem: Oct 28 11:27:51 old-k8s-version-674802 kubelet[662]: E1028 11:27:51.412111     662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1028 11:32:34.600524 1522650 logs.go:138] Found kubelet problem: Oct 28 11:27:52 old-k8s-version-674802 kubelet[662]: E1028 11:27:52.828529     662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
	W1028 11:32:34.601001 1522650 logs.go:138] Found kubelet problem: Oct 28 11:27:58 old-k8s-version-674802 kubelet[662]: E1028 11:27:58.495326     662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
	W1028 11:32:34.601187 1522650 logs.go:138] Found kubelet problem: Oct 28 11:28:06 old-k8s-version-674802 kubelet[662]: E1028 11:28:06.398915     662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 11:32:34.601514 1522650 logs.go:138] Found kubelet problem: Oct 28 11:28:11 old-k8s-version-674802 kubelet[662]: E1028 11:28:11.397340     662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
	W1028 11:32:34.601701 1522650 logs.go:138] Found kubelet problem: Oct 28 11:28:18 old-k8s-version-674802 kubelet[662]: E1028 11:28:18.397191     662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 11:32:34.602284 1522650 logs.go:138] Found kubelet problem: Oct 28 11:28:25 old-k8s-version-674802 kubelet[662]: E1028 11:28:25.930176     662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
	W1028 11:32:34.602842 1522650 logs.go:138] Found kubelet problem: Oct 28 11:28:28 old-k8s-version-674802 kubelet[662]: E1028 11:28:28.494838     662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
	W1028 11:32:34.605303 1522650 logs.go:138] Found kubelet problem: Oct 28 11:28:32 old-k8s-version-674802 kubelet[662]: E1028 11:28:32.410167     662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1028 11:32:34.605634 1522650 logs.go:138] Found kubelet problem: Oct 28 11:28:39 old-k8s-version-674802 kubelet[662]: E1028 11:28:39.398920     662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
	W1028 11:32:34.605818 1522650 logs.go:138] Found kubelet problem: Oct 28 11:28:43 old-k8s-version-674802 kubelet[662]: E1028 11:28:43.399781     662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 11:32:34.606151 1522650 logs.go:138] Found kubelet problem: Oct 28 11:28:50 old-k8s-version-674802 kubelet[662]: E1028 11:28:50.396875     662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
	W1028 11:32:34.606339 1522650 logs.go:138] Found kubelet problem: Oct 28 11:28:56 old-k8s-version-674802 kubelet[662]: E1028 11:28:56.397361     662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 11:32:34.606668 1522650 logs.go:138] Found kubelet problem: Oct 28 11:29:02 old-k8s-version-674802 kubelet[662]: E1028 11:29:02.396819     662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
	W1028 11:32:34.606849 1522650 logs.go:138] Found kubelet problem: Oct 28 11:29:08 old-k8s-version-674802 kubelet[662]: E1028 11:29:08.398191     662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 11:32:34.607431 1522650 logs.go:138] Found kubelet problem: Oct 28 11:29:14 old-k8s-version-674802 kubelet[662]: E1028 11:29:14.078563     662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
	W1028 11:32:34.607764 1522650 logs.go:138] Found kubelet problem: Oct 28 11:29:18 old-k8s-version-674802 kubelet[662]: E1028 11:29:18.494837     662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
	W1028 11:32:34.607959 1522650 logs.go:138] Found kubelet problem: Oct 28 11:29:22 old-k8s-version-674802 kubelet[662]: E1028 11:29:22.397401     662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 11:32:34.608289 1522650 logs.go:138] Found kubelet problem: Oct 28 11:29:29 old-k8s-version-674802 kubelet[662]: E1028 11:29:29.397589     662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
	W1028 11:32:34.608471 1522650 logs.go:138] Found kubelet problem: Oct 28 11:29:33 old-k8s-version-674802 kubelet[662]: E1028 11:29:33.397285     662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 11:32:34.608798 1522650 logs.go:138] Found kubelet problem: Oct 28 11:29:40 old-k8s-version-674802 kubelet[662]: E1028 11:29:40.396913     662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
	W1028 11:32:34.608982 1522650 logs.go:138] Found kubelet problem: Oct 28 11:29:45 old-k8s-version-674802 kubelet[662]: E1028 11:29:45.397674     662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 11:32:34.609307 1522650 logs.go:138] Found kubelet problem: Oct 28 11:29:52 old-k8s-version-674802 kubelet[662]: E1028 11:29:52.396836     662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
	W1028 11:32:34.611779 1522650 logs.go:138] Found kubelet problem: Oct 28 11:29:57 old-k8s-version-674802 kubelet[662]: E1028 11:29:57.431254     662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1028 11:32:34.612107 1522650 logs.go:138] Found kubelet problem: Oct 28 11:30:06 old-k8s-version-674802 kubelet[662]: E1028 11:30:06.396711     662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
	W1028 11:32:34.612291 1522650 logs.go:138] Found kubelet problem: Oct 28 11:30:12 old-k8s-version-674802 kubelet[662]: E1028 11:30:12.397420     662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 11:32:34.612615 1522650 logs.go:138] Found kubelet problem: Oct 28 11:30:19 old-k8s-version-674802 kubelet[662]: E1028 11:30:19.397829     662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
	W1028 11:32:34.612796 1522650 logs.go:138] Found kubelet problem: Oct 28 11:30:23 old-k8s-version-674802 kubelet[662]: E1028 11:30:23.400863     662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 11:32:34.613385 1522650 logs.go:138] Found kubelet problem: Oct 28 11:30:35 old-k8s-version-674802 kubelet[662]: E1028 11:30:35.284773     662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
	W1028 11:32:34.613566 1522650 logs.go:138] Found kubelet problem: Oct 28 11:30:37 old-k8s-version-674802 kubelet[662]: E1028 11:30:37.398847     662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 11:32:34.613890 1522650 logs.go:138] Found kubelet problem: Oct 28 11:30:38 old-k8s-version-674802 kubelet[662]: E1028 11:30:38.494821     662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
	W1028 11:32:34.614074 1522650 logs.go:138] Found kubelet problem: Oct 28 11:30:52 old-k8s-version-674802 kubelet[662]: E1028 11:30:52.398247     662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 11:32:34.614401 1522650 logs.go:138] Found kubelet problem: Oct 28 11:30:53 old-k8s-version-674802 kubelet[662]: E1028 11:30:53.397067     662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
	W1028 11:32:34.614724 1522650 logs.go:138] Found kubelet problem: Oct 28 11:31:04 old-k8s-version-674802 kubelet[662]: E1028 11:31:04.396824     662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
	W1028 11:32:34.614909 1522650 logs.go:138] Found kubelet problem: Oct 28 11:31:04 old-k8s-version-674802 kubelet[662]: E1028 11:31:04.398636     662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 11:32:34.615234 1522650 logs.go:138] Found kubelet problem: Oct 28 11:31:18 old-k8s-version-674802 kubelet[662]: E1028 11:31:18.397153     662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
	W1028 11:32:34.615415 1522650 logs.go:138] Found kubelet problem: Oct 28 11:31:18 old-k8s-version-674802 kubelet[662]: E1028 11:31:18.397448     662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 11:32:34.615751 1522650 logs.go:138] Found kubelet problem: Oct 28 11:31:30 old-k8s-version-674802 kubelet[662]: E1028 11:31:30.396768     662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
	W1028 11:32:34.615935 1522650 logs.go:138] Found kubelet problem: Oct 28 11:31:32 old-k8s-version-674802 kubelet[662]: E1028 11:31:32.397834     662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 11:32:34.616260 1522650 logs.go:138] Found kubelet problem: Oct 28 11:31:42 old-k8s-version-674802 kubelet[662]: E1028 11:31:42.396795     662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
	W1028 11:32:34.616442 1522650 logs.go:138] Found kubelet problem: Oct 28 11:31:47 old-k8s-version-674802 kubelet[662]: E1028 11:31:47.398090     662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 11:32:34.616766 1522650 logs.go:138] Found kubelet problem: Oct 28 11:31:56 old-k8s-version-674802 kubelet[662]: E1028 11:31:56.396772     662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
	W1028 11:32:34.616947 1522650 logs.go:138] Found kubelet problem: Oct 28 11:31:58 old-k8s-version-674802 kubelet[662]: E1028 11:31:58.397362     662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 11:32:34.617274 1522650 logs.go:138] Found kubelet problem: Oct 28 11:32:07 old-k8s-version-674802 kubelet[662]: E1028 11:32:07.399844     662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
	W1028 11:32:34.617455 1522650 logs.go:138] Found kubelet problem: Oct 28 11:32:13 old-k8s-version-674802 kubelet[662]: E1028 11:32:13.397227     662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 11:32:34.617780 1522650 logs.go:138] Found kubelet problem: Oct 28 11:32:19 old-k8s-version-674802 kubelet[662]: E1028 11:32:19.398441     662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
	W1028 11:32:34.617961 1522650 logs.go:138] Found kubelet problem: Oct 28 11:32:28 old-k8s-version-674802 kubelet[662]: E1028 11:32:28.398320     662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 11:32:34.618293 1522650 logs.go:138] Found kubelet problem: Oct 28 11:32:34 old-k8s-version-674802 kubelet[662]: E1028 11:32:34.396869     662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
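The whole block above reduces to the same two failures cycling: metrics-server-9975d5f86-lv8qx can never pull fake.domain/registry.k8s.io/echoserver:1.4 because fake.domain is deliberately unresolvable, and dashboard-metrics-scraper-8d5bb5db8-8ft4v keeps crashing, so its restart back-off has climbed to 2m40s. A minimal sketch of checking both by hand, assuming shell access to the node via the profile named in the log (these commands are not part of the test run):

	# Reproduce the ErrImagePull: the pull fails on DNS resolution of fake.domain.
	minikube -p old-k8s-version-674802 ssh -- sudo crictl pull fake.domain/registry.k8s.io/echoserver:1.4
	# Inspect why the scraper container keeps exiting (pod name copied from the log).
	kubectl --context old-k8s-version-674802 -n kubernetes-dashboard \
	  logs dashboard-metrics-scraper-8d5bb5db8-8ft4v --previous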
	I1028 11:32:34.618303 1522650 logs.go:123] Gathering logs for dmesg ...
	I1028 11:32:34.618317 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 11:32:34.637281 1522650 logs.go:123] Gathering logs for kube-apiserver [c02d779e69c4a6181f499ea147b62985bdd68ffb9d61fe7dab43115ca4318de6] ...
	I1028 11:32:34.637308 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c02d779e69c4a6181f499ea147b62985bdd68ffb9d61fe7dab43115ca4318de6"
	I1028 11:32:34.701952 1522650 logs.go:123] Gathering logs for kube-apiserver [ba54ab63823c2fcfe3e9bc95fca852e480e0d8fae4071a23e1fc38d3e74384cc] ...
	I1028 11:32:34.701983 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ba54ab63823c2fcfe3e9bc95fca852e480e0d8fae4071a23e1fc38d3e74384cc"
	I1028 11:32:34.753229 1522650 logs.go:123] Gathering logs for coredns [2a9df06520f732f1766508da84b61f745cb047b5f7bcf5bf3ef9cb3891f6239f] ...
	I1028 11:32:34.753267 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2a9df06520f732f1766508da84b61f745cb047b5f7bcf5bf3ef9cb3891f6239f"
	I1028 11:32:34.818573 1522650 logs.go:123] Gathering logs for container status ...
	I1028 11:32:34.818602 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 11:32:34.863856 1522650 logs.go:123] Gathering logs for kube-proxy [c0ed41137fbff35ffcb34df99174bf1cb9e6e2fda2154d421ab797a438e507bf] ...
	I1028 11:32:34.863883 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0ed41137fbff35ffcb34df99174bf1cb9e6e2fda2154d421ab797a438e507bf"
	I1028 11:32:34.912972 1522650 logs.go:123] Gathering logs for kindnet [42478c583a7df5e62ae3718bc78fb6be211abb490f9190466870859ec29e3bf3] ...
	I1028 11:32:34.913001 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 42478c583a7df5e62ae3718bc78fb6be211abb490f9190466870859ec29e3bf3"
	I1028 11:32:34.953865 1522650 logs.go:123] Gathering logs for containerd ...
	I1028 11:32:34.953893 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1028 11:32:35.019851 1522650 logs.go:123] Gathering logs for storage-provisioner [e4aa22206b37d13d9665658eeeb808da6cd7c1789a0887c07a5f6460c9dd38f5] ...
	I1028 11:32:35.019890 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e4aa22206b37d13d9665658eeeb808da6cd7c1789a0887c07a5f6460c9dd38f5"
	I1028 11:32:35.059465 1522650 logs.go:123] Gathering logs for etcd [6208543cc8b3c7edcccd800e0f9d98e845390bf870426de3226d81781dce3148] ...
	I1028 11:32:35.059490 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6208543cc8b3c7edcccd800e0f9d98e845390bf870426de3226d81781dce3148"
	I1028 11:32:35.105788 1522650 logs.go:123] Gathering logs for kube-scheduler [857580d96023ba113555b54f38493703fce44c36a25523c35c1fd07c51eee056] ...
	I1028 11:32:35.105818 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 857580d96023ba113555b54f38493703fce44c36a25523c35c1fd07c51eee056"
	I1028 11:32:35.147379 1522650 logs.go:123] Gathering logs for kube-proxy [8d4b3dad3dd90f3ec833f354de4e8225bdaf07199d5245c988b1fdbc527c1015] ...
	I1028 11:32:35.147422 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d4b3dad3dd90f3ec833f354de4e8225bdaf07199d5245c988b1fdbc527c1015"
	I1028 11:32:35.184732 1522650 logs.go:123] Gathering logs for kube-controller-manager [4937ca78533bbe1e9024be3e8c38035f4fb621e9cfcd8ef6fc974857b5f788d7] ...
	I1028 11:32:35.184759 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4937ca78533bbe1e9024be3e8c38035f4fb621e9cfcd8ef6fc974857b5f788d7"
	I1028 11:32:35.257229 1522650 logs.go:123] Gathering logs for storage-provisioner [af354fdce961d0c931e3e7b6826943560aa505683dbea93dedbce9a94105e0f8] ...
	I1028 11:32:35.257265 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 af354fdce961d0c931e3e7b6826943560aa505683dbea93dedbce9a94105e0f8"
	I1028 11:32:35.307930 1522650 out.go:358] Setting ErrFile to fd 2...
	I1028 11:32:35.307955 1522650 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1028 11:32:35.308039 1522650 out.go:270] X Problems detected in kubelet:
	W1028 11:32:35.308052 1522650 out.go:270]   Oct 28 11:32:07 old-k8s-version-674802 kubelet[662]: E1028 11:32:07.399844     662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
	W1028 11:32:35.308074 1522650 out.go:270]   Oct 28 11:32:13 old-k8s-version-674802 kubelet[662]: E1028 11:32:13.397227     662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 11:32:35.308084 1522650 out.go:270]   Oct 28 11:32:19 old-k8s-version-674802 kubelet[662]: E1028 11:32:19.398441     662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
	W1028 11:32:35.308089 1522650 out.go:270]   Oct 28 11:32:28 old-k8s-version-674802 kubelet[662]: E1028 11:32:28.398320     662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 11:32:35.308094 1522650 out.go:270]   Oct 28 11:32:34 old-k8s-version-674802 kubelet[662]: E1028 11:32:34.396869     662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
	I1028 11:32:35.308105 1522650 out.go:358] Setting ErrFile to fd 2...
	I1028 11:32:35.308112 1522650 out.go:392] TERM=,COLORTERM=, which probably does not support color
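The five "Problems detected in kubelet" entries are the most recent matches from the kubelet journal, echoed to stderr before the run resumes polling. They can be regenerated from the raw journal using the same command the log gatherer runs, plus a filter; the grep pattern is an assumption, keyed to the source files named in the warnings:

	# Re-derive the problem summary from the raw kubelet journal.
	sudo journalctl -u kubelet -n 400 | grep -E 'pod_workers\.go|reflector\.go'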
	I1028 11:32:45.308563 1522650 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 11:32:45.323590 1522650 api_server.go:72] duration metric: took 5m53.809706364s to wait for apiserver process to appear ...
	I1028 11:32:45.323614 1522650 api_server.go:88] waiting for apiserver healthz status ...
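Having confirmed the kube-apiserver process after 5m53s, the run now waits on its healthz endpoint. A manual probe would look like the sketch below; the port is an assumption (minikube's default apiserver port, not shown in this excerpt):

	# Anonymous healthz probe from inside the node; "ok" means the apiserver is serving.
	minikube -p old-k8s-version-674802 ssh -- curl -sk https://localhost:8443/healthz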
	I1028 11:32:45.323723 1522650 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1028 11:32:45.323782 1522650 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 11:32:45.369849 1522650 cri.go:89] found id: "c02d779e69c4a6181f499ea147b62985bdd68ffb9d61fe7dab43115ca4318de6"
	I1028 11:32:45.369868 1522650 cri.go:89] found id: "ba54ab63823c2fcfe3e9bc95fca852e480e0d8fae4071a23e1fc38d3e74384cc"
	I1028 11:32:45.369873 1522650 cri.go:89] found id: ""
	I1028 11:32:45.369880 1522650 logs.go:282] 2 containers: [c02d779e69c4a6181f499ea147b62985bdd68ffb9d61fe7dab43115ca4318de6 ba54ab63823c2fcfe3e9bc95fca852e480e0d8fae4071a23e1fc38d3e74384cc]
	I1028 11:32:45.369934 1522650 ssh_runner.go:195] Run: which crictl
	I1028 11:32:45.374584 1522650 ssh_runner.go:195] Run: which crictl
	I1028 11:32:45.379004 1522650 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1028 11:32:45.379074 1522650 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 11:32:45.433346 1522650 cri.go:89] found id: "6208543cc8b3c7edcccd800e0f9d98e845390bf870426de3226d81781dce3148"
	I1028 11:32:45.433423 1522650 cri.go:89] found id: "01a108b46e6f4f9217c1f90a9611bdbc7956ad16edbfd8093ad46cc6ef34b232"
	I1028 11:32:45.433442 1522650 cri.go:89] found id: ""
	I1028 11:32:45.433462 1522650 logs.go:282] 2 containers: [6208543cc8b3c7edcccd800e0f9d98e845390bf870426de3226d81781dce3148 01a108b46e6f4f9217c1f90a9611bdbc7956ad16edbfd8093ad46cc6ef34b232]
	I1028 11:32:45.433546 1522650 ssh_runner.go:195] Run: which crictl
	I1028 11:32:45.438315 1522650 ssh_runner.go:195] Run: which crictl
	I1028 11:32:45.441971 1522650 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1028 11:32:45.442046 1522650 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 11:32:45.507419 1522650 cri.go:89] found id: "b864ea5367f07235e01b7c4c4545bda20ba5924d99b8e542c0315227a77c2c59"
	I1028 11:32:45.507441 1522650 cri.go:89] found id: "2a9df06520f732f1766508da84b61f745cb047b5f7bcf5bf3ef9cb3891f6239f"
	I1028 11:32:45.507446 1522650 cri.go:89] found id: ""
	I1028 11:32:45.507453 1522650 logs.go:282] 2 containers: [b864ea5367f07235e01b7c4c4545bda20ba5924d99b8e542c0315227a77c2c59 2a9df06520f732f1766508da84b61f745cb047b5f7bcf5bf3ef9cb3891f6239f]
	I1028 11:32:45.507510 1522650 ssh_runner.go:195] Run: which crictl
	I1028 11:32:45.513603 1522650 ssh_runner.go:195] Run: which crictl
	I1028 11:32:45.517373 1522650 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1028 11:32:45.517452 1522650 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 11:32:45.565346 1522650 cri.go:89] found id: "31281b2de0e80c98175b18b80c8ece18d25bb88841661719ca7805a5cb795824"
	I1028 11:32:45.565381 1522650 cri.go:89] found id: "857580d96023ba113555b54f38493703fce44c36a25523c35c1fd07c51eee056"
	I1028 11:32:45.565386 1522650 cri.go:89] found id: ""
	I1028 11:32:45.565393 1522650 logs.go:282] 2 containers: [31281b2de0e80c98175b18b80c8ece18d25bb88841661719ca7805a5cb795824 857580d96023ba113555b54f38493703fce44c36a25523c35c1fd07c51eee056]
	I1028 11:32:45.565455 1522650 ssh_runner.go:195] Run: which crictl
	I1028 11:32:45.569124 1522650 ssh_runner.go:195] Run: which crictl
	I1028 11:32:45.572626 1522650 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1028 11:32:45.572699 1522650 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 11:32:45.624046 1522650 cri.go:89] found id: "c0ed41137fbff35ffcb34df99174bf1cb9e6e2fda2154d421ab797a438e507bf"
	I1028 11:32:45.624079 1522650 cri.go:89] found id: "8d4b3dad3dd90f3ec833f354de4e8225bdaf07199d5245c988b1fdbc527c1015"
	I1028 11:32:45.624084 1522650 cri.go:89] found id: ""
	I1028 11:32:45.624091 1522650 logs.go:282] 2 containers: [c0ed41137fbff35ffcb34df99174bf1cb9e6e2fda2154d421ab797a438e507bf 8d4b3dad3dd90f3ec833f354de4e8225bdaf07199d5245c988b1fdbc527c1015]
	I1028 11:32:45.624152 1522650 ssh_runner.go:195] Run: which crictl
	I1028 11:32:45.627765 1522650 ssh_runner.go:195] Run: which crictl
	I1028 11:32:45.631106 1522650 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 11:32:45.631183 1522650 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 11:32:45.680421 1522650 cri.go:89] found id: "056d20453e357e86aa3e62b0dd7d945c40ec05cc5f462727941eac3714718438"
	I1028 11:32:45.680444 1522650 cri.go:89] found id: "4937ca78533bbe1e9024be3e8c38035f4fb621e9cfcd8ef6fc974857b5f788d7"
	I1028 11:32:45.680460 1522650 cri.go:89] found id: ""
	I1028 11:32:45.680468 1522650 logs.go:282] 2 containers: [056d20453e357e86aa3e62b0dd7d945c40ec05cc5f462727941eac3714718438 4937ca78533bbe1e9024be3e8c38035f4fb621e9cfcd8ef6fc974857b5f788d7]
	I1028 11:32:45.680531 1522650 ssh_runner.go:195] Run: which crictl
	I1028 11:32:45.684137 1522650 ssh_runner.go:195] Run: which crictl
	I1028 11:32:45.687407 1522650 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1028 11:32:45.687486 1522650 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 11:32:45.741649 1522650 cri.go:89] found id: "42478c583a7df5e62ae3718bc78fb6be211abb490f9190466870859ec29e3bf3"
	I1028 11:32:45.741671 1522650 cri.go:89] found id: "120e0085c59b7ce7fd3c7afbb14ea7637d4c18b660f3d35631be06f9007e3a33"
	I1028 11:32:45.741675 1522650 cri.go:89] found id: ""
	I1028 11:32:45.741683 1522650 logs.go:282] 2 containers: [42478c583a7df5e62ae3718bc78fb6be211abb490f9190466870859ec29e3bf3 120e0085c59b7ce7fd3c7afbb14ea7637d4c18b660f3d35631be06f9007e3a33]
	I1028 11:32:45.741741 1522650 ssh_runner.go:195] Run: which crictl
	I1028 11:32:45.745863 1522650 ssh_runner.go:195] Run: which crictl
	I1028 11:32:45.749779 1522650 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 11:32:45.749843 1522650 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 11:32:45.801413 1522650 cri.go:89] found id: "9666309986efcb4076982c7df1d9e0c9f905cbcee4a0e3d7a1dcd2ab0132348b"
	I1028 11:32:45.801471 1522650 cri.go:89] found id: ""
	I1028 11:32:45.801481 1522650 logs.go:282] 1 containers: [9666309986efcb4076982c7df1d9e0c9f905cbcee4a0e3d7a1dcd2ab0132348b]
	I1028 11:32:45.801539 1522650 ssh_runner.go:195] Run: which crictl
	I1028 11:32:45.805656 1522650 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1028 11:32:45.805718 1522650 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1028 11:32:45.904645 1522650 cri.go:89] found id: "af354fdce961d0c931e3e7b6826943560aa505683dbea93dedbce9a94105e0f8"
	I1028 11:32:45.904670 1522650 cri.go:89] found id: "e4aa22206b37d13d9665658eeeb808da6cd7c1789a0887c07a5f6460c9dd38f5"
	I1028 11:32:45.904675 1522650 cri.go:89] found id: ""
	I1028 11:32:45.904682 1522650 logs.go:282] 2 containers: [af354fdce961d0c931e3e7b6826943560aa505683dbea93dedbce9a94105e0f8 e4aa22206b37d13d9665658eeeb808da6cd7c1789a0887c07a5f6460c9dd38f5]
	I1028 11:32:45.904738 1522650 ssh_runner.go:195] Run: which crictl
	I1028 11:32:45.910966 1522650 ssh_runner.go:195] Run: which crictl
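The enumeration above runs one probe per component and finds two container IDs for almost everything (the pre-restart and post-restart instances) but only one for kubernetes-dashboard. Condensed into a single hand-run loop over the same crictl call:

	# One "crictl ps" probe per component, mirroring the cri.go listing above.
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet kubernetes-dashboard storage-provisioner; do
	  echo "== $name =="
	  sudo crictl ps -a --quiet --name="$name"
	done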
	I1028 11:32:45.917821 1522650 logs.go:123] Gathering logs for coredns [2a9df06520f732f1766508da84b61f745cb047b5f7bcf5bf3ef9cb3891f6239f] ...
	I1028 11:32:45.917843 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2a9df06520f732f1766508da84b61f745cb047b5f7bcf5bf3ef9cb3891f6239f"
	I1028 11:32:45.972711 1522650 logs.go:123] Gathering logs for kindnet [120e0085c59b7ce7fd3c7afbb14ea7637d4c18b660f3d35631be06f9007e3a33] ...
	I1028 11:32:45.972737 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 120e0085c59b7ce7fd3c7afbb14ea7637d4c18b660f3d35631be06f9007e3a33"
	I1028 11:32:46.067160 1522650 logs.go:123] Gathering logs for kubelet ...
	I1028 11:32:46.067189 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1028 11:32:46.128234 1522650 logs.go:138] Found kubelet problem: Oct 28 11:27:11 old-k8s-version-674802 kubelet[662]: E1028 11:27:11.736433     662 reflector.go:138] object-"default"/"default-token-rkh5t": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-rkh5t" is forbidden: User "system:node:old-k8s-version-674802" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-674802' and this object
	W1028 11:32:46.128574 1522650 logs.go:138] Found kubelet problem: Oct 28 11:27:11 old-k8s-version-674802 kubelet[662]: E1028 11:27:11.767488     662 reflector.go:138] object-"kube-system"/"metrics-server-token-bnmqq": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-bnmqq" is forbidden: User "system:node:old-k8s-version-674802" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-674802' and this object
	W1028 11:32:46.128790 1522650 logs.go:138] Found kubelet problem: Oct 28 11:27:11 old-k8s-version-674802 kubelet[662]: E1028 11:27:11.769521     662 reflector.go:138] object-"kube-system"/"kindnet-token-rljtj": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-rljtj" is forbidden: User "system:node:old-k8s-version-674802" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-674802' and this object
	W1028 11:32:46.129006 1522650 logs.go:138] Found kubelet problem: Oct 28 11:27:11 old-k8s-version-674802 kubelet[662]: E1028 11:27:11.783592     662 reflector.go:138] object-"kube-system"/"kube-proxy-token-v6b5p": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-v6b5p" is forbidden: User "system:node:old-k8s-version-674802" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-674802' and this object
	W1028 11:32:46.129208 1522650 logs.go:138] Found kubelet problem: Oct 28 11:27:11 old-k8s-version-674802 kubelet[662]: E1028 11:27:11.786532     662 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-674802" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-674802' and this object
	W1028 11:32:46.129433 1522650 logs.go:138] Found kubelet problem: Oct 28 11:27:11 old-k8s-version-674802 kubelet[662]: E1028 11:27:11.786656     662 reflector.go:138] object-"kube-system"/"storage-provisioner-token-7s7sg": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-7s7sg" is forbidden: User "system:node:old-k8s-version-674802" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-674802' and this object
	W1028 11:32:46.129655 1522650 logs.go:138] Found kubelet problem: Oct 28 11:27:11 old-k8s-version-674802 kubelet[662]: E1028 11:27:11.786717     662 reflector.go:138] object-"kube-system"/"coredns-token-t6lq7": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-t6lq7" is forbidden: User "system:node:old-k8s-version-674802" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-674802' and this object
	W1028 11:32:46.129858 1522650 logs.go:138] Found kubelet problem: Oct 28 11:27:11 old-k8s-version-674802 kubelet[662]: E1028 11:27:11.786765     662 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-674802" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-674802' and this object
	W1028 11:32:46.140618 1522650 logs.go:138] Found kubelet problem: Oct 28 11:27:13 old-k8s-version-674802 kubelet[662]: E1028 11:27:13.704595     662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1028 11:32:46.143017 1522650 logs.go:138] Found kubelet problem: Oct 28 11:27:14 old-k8s-version-674802 kubelet[662]: E1028 11:27:14.680234     662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 11:32:46.145830 1522650 logs.go:138] Found kubelet problem: Oct 28 11:27:27 old-k8s-version-674802 kubelet[662]: E1028 11:27:27.405191     662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1028 11:32:46.148040 1522650 logs.go:138] Found kubelet problem: Oct 28 11:27:36 old-k8s-version-674802 kubelet[662]: E1028 11:27:36.774117     662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
	W1028 11:32:46.148376 1522650 logs.go:138] Found kubelet problem: Oct 28 11:27:37 old-k8s-version-674802 kubelet[662]: E1028 11:27:37.779544     662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
	W1028 11:32:46.148704 1522650 logs.go:138] Found kubelet problem: Oct 28 11:27:38 old-k8s-version-674802 kubelet[662]: E1028 11:27:38.781616     662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
	W1028 11:32:46.148892 1522650 logs.go:138] Found kubelet problem: Oct 28 11:27:39 old-k8s-version-674802 kubelet[662]: E1028 11:27:39.404833     662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 11:32:46.149721 1522650 logs.go:138] Found kubelet problem: Oct 28 11:27:45 old-k8s-version-674802 kubelet[662]: E1028 11:27:45.806023     662 pod_workers.go:191] Error syncing pod eb6e0fb4-e030-4eb7-8b96-477de7691df6 ("storage-provisioner_kube-system(eb6e0fb4-e030-4eb7-8b96-477de7691df6)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(eb6e0fb4-e030-4eb7-8b96-477de7691df6)"
	W1028 11:32:46.152611 1522650 logs.go:138] Found kubelet problem: Oct 28 11:27:51 old-k8s-version-674802 kubelet[662]: E1028 11:27:51.412111     662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1028 11:32:46.153207 1522650 logs.go:138] Found kubelet problem: Oct 28 11:27:52 old-k8s-version-674802 kubelet[662]: E1028 11:27:52.828529     662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
	W1028 11:32:46.153668 1522650 logs.go:138] Found kubelet problem: Oct 28 11:27:58 old-k8s-version-674802 kubelet[662]: E1028 11:27:58.495326     662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
	W1028 11:32:46.153849 1522650 logs.go:138] Found kubelet problem: Oct 28 11:28:06 old-k8s-version-674802 kubelet[662]: E1028 11:28:06.398915     662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 11:32:46.154173 1522650 logs.go:138] Found kubelet problem: Oct 28 11:28:11 old-k8s-version-674802 kubelet[662]: E1028 11:28:11.397340     662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
	W1028 11:32:46.154353 1522650 logs.go:138] Found kubelet problem: Oct 28 11:28:18 old-k8s-version-674802 kubelet[662]: E1028 11:28:18.397191     662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 11:32:46.154935 1522650 logs.go:138] Found kubelet problem: Oct 28 11:28:25 old-k8s-version-674802 kubelet[662]: E1028 11:28:25.930176     662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
	W1028 11:32:46.155258 1522650 logs.go:138] Found kubelet problem: Oct 28 11:28:28 old-k8s-version-674802 kubelet[662]: E1028 11:28:28.494838     662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
	W1028 11:32:46.157743 1522650 logs.go:138] Found kubelet problem: Oct 28 11:28:32 old-k8s-version-674802 kubelet[662]: E1028 11:28:32.410167     662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1028 11:32:46.158074 1522650 logs.go:138] Found kubelet problem: Oct 28 11:28:39 old-k8s-version-674802 kubelet[662]: E1028 11:28:39.398920     662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
	W1028 11:32:46.158257 1522650 logs.go:138] Found kubelet problem: Oct 28 11:28:43 old-k8s-version-674802 kubelet[662]: E1028 11:28:43.399781     662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 11:32:46.158584 1522650 logs.go:138] Found kubelet problem: Oct 28 11:28:50 old-k8s-version-674802 kubelet[662]: E1028 11:28:50.396875     662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
	W1028 11:32:46.158767 1522650 logs.go:138] Found kubelet problem: Oct 28 11:28:56 old-k8s-version-674802 kubelet[662]: E1028 11:28:56.397361     662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 11:32:46.159090 1522650 logs.go:138] Found kubelet problem: Oct 28 11:29:02 old-k8s-version-674802 kubelet[662]: E1028 11:29:02.396819     662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
	W1028 11:32:46.159293 1522650 logs.go:138] Found kubelet problem: Oct 28 11:29:08 old-k8s-version-674802 kubelet[662]: E1028 11:29:08.398191     662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 11:32:46.159946 1522650 logs.go:138] Found kubelet problem: Oct 28 11:29:14 old-k8s-version-674802 kubelet[662]: E1028 11:29:14.078563     662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
	W1028 11:32:46.160290 1522650 logs.go:138] Found kubelet problem: Oct 28 11:29:18 old-k8s-version-674802 kubelet[662]: E1028 11:29:18.494837     662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
	W1028 11:32:46.160481 1522650 logs.go:138] Found kubelet problem: Oct 28 11:29:22 old-k8s-version-674802 kubelet[662]: E1028 11:29:22.397401     662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 11:32:46.160804 1522650 logs.go:138] Found kubelet problem: Oct 28 11:29:29 old-k8s-version-674802 kubelet[662]: E1028 11:29:29.397589     662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
	W1028 11:32:46.160987 1522650 logs.go:138] Found kubelet problem: Oct 28 11:29:33 old-k8s-version-674802 kubelet[662]: E1028 11:29:33.397285     662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 11:32:46.161311 1522650 logs.go:138] Found kubelet problem: Oct 28 11:29:40 old-k8s-version-674802 kubelet[662]: E1028 11:29:40.396913     662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
	W1028 11:32:46.161495 1522650 logs.go:138] Found kubelet problem: Oct 28 11:29:45 old-k8s-version-674802 kubelet[662]: E1028 11:29:45.397674     662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 11:32:46.161821 1522650 logs.go:138] Found kubelet problem: Oct 28 11:29:52 old-k8s-version-674802 kubelet[662]: E1028 11:29:52.396836     662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
	W1028 11:32:46.164259 1522650 logs.go:138] Found kubelet problem: Oct 28 11:29:57 old-k8s-version-674802 kubelet[662]: E1028 11:29:57.431254     662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1028 11:32:46.164587 1522650 logs.go:138] Found kubelet problem: Oct 28 11:30:06 old-k8s-version-674802 kubelet[662]: E1028 11:30:06.396711     662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
	W1028 11:32:46.164771 1522650 logs.go:138] Found kubelet problem: Oct 28 11:30:12 old-k8s-version-674802 kubelet[662]: E1028 11:30:12.397420     662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 11:32:46.165094 1522650 logs.go:138] Found kubelet problem: Oct 28 11:30:19 old-k8s-version-674802 kubelet[662]: E1028 11:30:19.397829     662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
	W1028 11:32:46.165276 1522650 logs.go:138] Found kubelet problem: Oct 28 11:30:23 old-k8s-version-674802 kubelet[662]: E1028 11:30:23.400863     662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 11:32:46.165874 1522650 logs.go:138] Found kubelet problem: Oct 28 11:30:35 old-k8s-version-674802 kubelet[662]: E1028 11:30:35.284773     662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
	W1028 11:32:46.166056 1522650 logs.go:138] Found kubelet problem: Oct 28 11:30:37 old-k8s-version-674802 kubelet[662]: E1028 11:30:37.398847     662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 11:32:46.166378 1522650 logs.go:138] Found kubelet problem: Oct 28 11:30:38 old-k8s-version-674802 kubelet[662]: E1028 11:30:38.494821     662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
	W1028 11:32:46.166560 1522650 logs.go:138] Found kubelet problem: Oct 28 11:30:52 old-k8s-version-674802 kubelet[662]: E1028 11:30:52.398247     662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 11:32:46.166884 1522650 logs.go:138] Found kubelet problem: Oct 28 11:30:53 old-k8s-version-674802 kubelet[662]: E1028 11:30:53.397067     662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
	W1028 11:32:46.167207 1522650 logs.go:138] Found kubelet problem: Oct 28 11:31:04 old-k8s-version-674802 kubelet[662]: E1028 11:31:04.396824     662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
	W1028 11:32:46.167388 1522650 logs.go:138] Found kubelet problem: Oct 28 11:31:04 old-k8s-version-674802 kubelet[662]: E1028 11:31:04.398636     662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 11:32:46.167725 1522650 logs.go:138] Found kubelet problem: Oct 28 11:31:18 old-k8s-version-674802 kubelet[662]: E1028 11:31:18.397153     662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
	W1028 11:32:46.167908 1522650 logs.go:138] Found kubelet problem: Oct 28 11:31:18 old-k8s-version-674802 kubelet[662]: E1028 11:31:18.397448     662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 11:32:46.168236 1522650 logs.go:138] Found kubelet problem: Oct 28 11:31:30 old-k8s-version-674802 kubelet[662]: E1028 11:31:30.396768     662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
	W1028 11:32:46.168418 1522650 logs.go:138] Found kubelet problem: Oct 28 11:31:32 old-k8s-version-674802 kubelet[662]: E1028 11:31:32.397834     662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 11:32:46.168745 1522650 logs.go:138] Found kubelet problem: Oct 28 11:31:42 old-k8s-version-674802 kubelet[662]: E1028 11:31:42.396795     662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
	W1028 11:32:46.168925 1522650 logs.go:138] Found kubelet problem: Oct 28 11:31:47 old-k8s-version-674802 kubelet[662]: E1028 11:31:47.398090     662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 11:32:46.169249 1522650 logs.go:138] Found kubelet problem: Oct 28 11:31:56 old-k8s-version-674802 kubelet[662]: E1028 11:31:56.396772     662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
	W1028 11:32:46.169434 1522650 logs.go:138] Found kubelet problem: Oct 28 11:31:58 old-k8s-version-674802 kubelet[662]: E1028 11:31:58.397362     662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 11:32:46.169758 1522650 logs.go:138] Found kubelet problem: Oct 28 11:32:07 old-k8s-version-674802 kubelet[662]: E1028 11:32:07.399844     662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
	W1028 11:32:46.169942 1522650 logs.go:138] Found kubelet problem: Oct 28 11:32:13 old-k8s-version-674802 kubelet[662]: E1028 11:32:13.397227     662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 11:32:46.170339 1522650 logs.go:138] Found kubelet problem: Oct 28 11:32:19 old-k8s-version-674802 kubelet[662]: E1028 11:32:19.398441     662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
	W1028 11:32:46.170526 1522650 logs.go:138] Found kubelet problem: Oct 28 11:32:28 old-k8s-version-674802 kubelet[662]: E1028 11:32:28.398320     662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 11:32:46.170854 1522650 logs.go:138] Found kubelet problem: Oct 28 11:32:34 old-k8s-version-674802 kubelet[662]: E1028 11:32:34.396869     662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
	W1028 11:32:46.173305 1522650 logs.go:138] Found kubelet problem: Oct 28 11:32:39 old-k8s-version-674802 kubelet[662]: E1028 11:32:39.419884     662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	I1028 11:32:46.173316 1522650 logs.go:123] Gathering logs for etcd [6208543cc8b3c7edcccd800e0f9d98e845390bf870426de3226d81781dce3148] ...
	I1028 11:32:46.173330 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6208543cc8b3c7edcccd800e0f9d98e845390bf870426de3226d81781dce3148"
	I1028 11:32:46.223969 1522650 logs.go:123] Gathering logs for kube-proxy [c0ed41137fbff35ffcb34df99174bf1cb9e6e2fda2154d421ab797a438e507bf] ...
	I1028 11:32:46.224006 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0ed41137fbff35ffcb34df99174bf1cb9e6e2fda2154d421ab797a438e507bf"
	I1028 11:32:46.277259 1522650 logs.go:123] Gathering logs for kubernetes-dashboard [9666309986efcb4076982c7df1d9e0c9f905cbcee4a0e3d7a1dcd2ab0132348b] ...
	I1028 11:32:46.277289 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9666309986efcb4076982c7df1d9e0c9f905cbcee4a0e3d7a1dcd2ab0132348b"
	I1028 11:32:46.337485 1522650 logs.go:123] Gathering logs for storage-provisioner [e4aa22206b37d13d9665658eeeb808da6cd7c1789a0887c07a5f6460c9dd38f5] ...
	I1028 11:32:46.337520 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e4aa22206b37d13d9665658eeeb808da6cd7c1789a0887c07a5f6460c9dd38f5"
	I1028 11:32:46.395372 1522650 logs.go:123] Gathering logs for container status ...
	I1028 11:32:46.395422 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 11:32:46.450094 1522650 logs.go:123] Gathering logs for describe nodes ...
	I1028 11:32:46.450127 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 11:32:46.647254 1522650 logs.go:123] Gathering logs for kube-scheduler [31281b2de0e80c98175b18b80c8ece18d25bb88841661719ca7805a5cb795824] ...
	I1028 11:32:46.647867 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31281b2de0e80c98175b18b80c8ece18d25bb88841661719ca7805a5cb795824"
	I1028 11:32:46.698393 1522650 logs.go:123] Gathering logs for kube-controller-manager [056d20453e357e86aa3e62b0dd7d945c40ec05cc5f462727941eac3714718438] ...
	I1028 11:32:46.698421 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 056d20453e357e86aa3e62b0dd7d945c40ec05cc5f462727941eac3714718438"
	I1028 11:32:46.756978 1522650 logs.go:123] Gathering logs for kube-controller-manager [4937ca78533bbe1e9024be3e8c38035f4fb621e9cfcd8ef6fc974857b5f788d7] ...
	I1028 11:32:46.757015 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4937ca78533bbe1e9024be3e8c38035f4fb621e9cfcd8ef6fc974857b5f788d7"
	I1028 11:32:46.831122 1522650 logs.go:123] Gathering logs for kindnet [42478c583a7df5e62ae3718bc78fb6be211abb490f9190466870859ec29e3bf3] ...
	I1028 11:32:46.831163 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 42478c583a7df5e62ae3718bc78fb6be211abb490f9190466870859ec29e3bf3"
	I1028 11:32:46.880307 1522650 logs.go:123] Gathering logs for etcd [01a108b46e6f4f9217c1f90a9611bdbc7956ad16edbfd8093ad46cc6ef34b232] ...
	I1028 11:32:46.880340 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 01a108b46e6f4f9217c1f90a9611bdbc7956ad16edbfd8093ad46cc6ef34b232"
	I1028 11:32:46.936132 1522650 logs.go:123] Gathering logs for kube-scheduler [857580d96023ba113555b54f38493703fce44c36a25523c35c1fd07c51eee056] ...
	I1028 11:32:46.936165 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 857580d96023ba113555b54f38493703fce44c36a25523c35c1fd07c51eee056"
	I1028 11:32:46.982104 1522650 logs.go:123] Gathering logs for kube-apiserver [ba54ab63823c2fcfe3e9bc95fca852e480e0d8fae4071a23e1fc38d3e74384cc] ...
	I1028 11:32:46.982133 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ba54ab63823c2fcfe3e9bc95fca852e480e0d8fae4071a23e1fc38d3e74384cc"
	I1028 11:32:47.048875 1522650 logs.go:123] Gathering logs for coredns [b864ea5367f07235e01b7c4c4545bda20ba5924d99b8e542c0315227a77c2c59] ...
	I1028 11:32:47.048911 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b864ea5367f07235e01b7c4c4545bda20ba5924d99b8e542c0315227a77c2c59"
	I1028 11:32:47.093129 1522650 logs.go:123] Gathering logs for kube-proxy [8d4b3dad3dd90f3ec833f354de4e8225bdaf07199d5245c988b1fdbc527c1015] ...
	I1028 11:32:47.093157 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d4b3dad3dd90f3ec833f354de4e8225bdaf07199d5245c988b1fdbc527c1015"
	I1028 11:32:47.132824 1522650 logs.go:123] Gathering logs for storage-provisioner [af354fdce961d0c931e3e7b6826943560aa505683dbea93dedbce9a94105e0f8] ...
	I1028 11:32:47.132849 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 af354fdce961d0c931e3e7b6826943560aa505683dbea93dedbce9a94105e0f8"
	I1028 11:32:47.172011 1522650 logs.go:123] Gathering logs for containerd ...
	I1028 11:32:47.172037 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1028 11:32:47.239434 1522650 logs.go:123] Gathering logs for dmesg ...
	I1028 11:32:47.239469 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 11:32:47.257467 1522650 logs.go:123] Gathering logs for kube-apiserver [c02d779e69c4a6181f499ea147b62985bdd68ffb9d61fe7dab43115ca4318de6] ...
	I1028 11:32:47.257498 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c02d779e69c4a6181f499ea147b62985bdd68ffb9d61fe7dab43115ca4318de6"
	I1028 11:32:47.317252 1522650 out.go:358] Setting ErrFile to fd 2...
	I1028 11:32:47.317286 1522650 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1028 11:32:47.317350 1522650 out.go:270] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W1028 11:32:47.317369 1522650 out.go:270]   Oct 28 11:32:13 old-k8s-version-674802 kubelet[662]: E1028 11:32:13.397227     662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Oct 28 11:32:13 old-k8s-version-674802 kubelet[662]: E1028 11:32:13.397227     662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 11:32:47.317384 1522650 out.go:270]   Oct 28 11:32:19 old-k8s-version-674802 kubelet[662]: E1028 11:32:19.398441     662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
	  Oct 28 11:32:19 old-k8s-version-674802 kubelet[662]: E1028 11:32:19.398441     662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
	W1028 11:32:47.317397 1522650 out.go:270]   Oct 28 11:32:28 old-k8s-version-674802 kubelet[662]: E1028 11:32:28.398320     662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Oct 28 11:32:28 old-k8s-version-674802 kubelet[662]: E1028 11:32:28.398320     662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 11:32:47.317405 1522650 out.go:270]   Oct 28 11:32:34 old-k8s-version-674802 kubelet[662]: E1028 11:32:34.396869     662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
	  Oct 28 11:32:34 old-k8s-version-674802 kubelet[662]: E1028 11:32:34.396869     662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
	W1028 11:32:47.317419 1522650 out.go:270]   Oct 28 11:32:39 old-k8s-version-674802 kubelet[662]: E1028 11:32:39.419884     662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	  Oct 28 11:32:39 old-k8s-version-674802 kubelet[662]: E1028 11:32:39.419884     662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	I1028 11:32:47.317439 1522650 out.go:358] Setting ErrFile to fd 2...
	I1028 11:32:47.317446 1522650 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 11:32:57.318433 1522650 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1028 11:32:57.330588 1522650 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1028 11:32:57.332171 1522650 out.go:201] 
	W1028 11:32:57.333624 1522650 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W1028 11:32:57.333844 1522650 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	* Suggestion: Control Plane could not update, try minikube delete --all --purge
	W1028 11:32:57.333985 1522650 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	* Related issue: https://github.com/kubernetes/minikube/issues/11417
	W1028 11:32:57.334048 1522650 out.go:270] * 
	* 
	W1028 11:32:57.335332 1522650 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1028 11:32:57.337475 1522650 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p old-k8s-version-674802 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0": exit status 102
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-674802
helpers_test.go:235: (dbg) docker inspect old-k8s-version-674802:

-- stdout --
	[
	    {
	        "Id": "dc3c31f51b66fcbdecd16e7e2130cc4bcf0676abcdfd75db06584109bc354ba7",
	        "Created": "2024-10-28T11:24:04.195440534Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1522844,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-10-28T11:26:43.172293132Z",
	            "FinishedAt": "2024-10-28T11:26:42.190662828Z"
	        },
	        "Image": "sha256:e536a13478ac3e12b0286f2242f0931e32c32cc3eeb0139a219c9133dcd9fe90",
	        "ResolvConfPath": "/var/lib/docker/containers/dc3c31f51b66fcbdecd16e7e2130cc4bcf0676abcdfd75db06584109bc354ba7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/dc3c31f51b66fcbdecd16e7e2130cc4bcf0676abcdfd75db06584109bc354ba7/hostname",
	        "HostsPath": "/var/lib/docker/containers/dc3c31f51b66fcbdecd16e7e2130cc4bcf0676abcdfd75db06584109bc354ba7/hosts",
	        "LogPath": "/var/lib/docker/containers/dc3c31f51b66fcbdecd16e7e2130cc4bcf0676abcdfd75db06584109bc354ba7/dc3c31f51b66fcbdecd16e7e2130cc4bcf0676abcdfd75db06584109bc354ba7-json.log",
	        "Name": "/old-k8s-version-674802",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-674802:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-674802",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/959053794ac2dcd3b48881ed4ff6293f07dc6e98a0aafbb16fb787c58523d221-init/diff:/var/lib/docker/overlay2/3a4c28ee2a9f0b48a71bf9958e5e93be9c21155427c18565406f15d470c50d00/diff",
	                "MergedDir": "/var/lib/docker/overlay2/959053794ac2dcd3b48881ed4ff6293f07dc6e98a0aafbb16fb787c58523d221/merged",
	                "UpperDir": "/var/lib/docker/overlay2/959053794ac2dcd3b48881ed4ff6293f07dc6e98a0aafbb16fb787c58523d221/diff",
	                "WorkDir": "/var/lib/docker/overlay2/959053794ac2dcd3b48881ed4ff6293f07dc6e98a0aafbb16fb787c58523d221/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-674802",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-674802/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-674802",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-674802",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-674802",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "fa682f8af7cd4c13c7751c7e04013881bb7477879d9a41b587770e995db3595c",
	            "SandboxKey": "/var/run/docker/netns/fa682f8af7cd",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "40375"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "40376"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "40379"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "40377"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "40378"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-674802": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null,
	                    "NetworkID": "5781a9c4ca19259a018dae251240a67b66da80fe7be0072f1f7a04b54b46de4f",
	                    "EndpointID": "ac6cd79e2fd3ea9a7822c749fdb308246197d3aff07dd9063e20f68e99ba28aa",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-674802",
	                        "dc3c31f51b66"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-674802 -n old-k8s-version-674802
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-674802 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-674802 logs -n 25: (2.501957355s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |         Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	| start   | -p cert-expiration-219316                              | cert-expiration-219316   | jenkins | v1.34.0 | 28 Oct 24 11:22 UTC | 28 Oct 24 11:23 UTC |
	|         | --memory=2048                                          |                          |         |         |                     |                     |
	|         | --cert-expiration=3m                                   |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	| ssh     | force-systemd-env-229837                               | force-systemd-env-229837 | jenkins | v1.34.0 | 28 Oct 24 11:23 UTC | 28 Oct 24 11:23 UTC |
	|         | ssh cat                                                |                          |         |         |                     |                     |
	|         | /etc/containerd/config.toml                            |                          |         |         |                     |                     |
	| delete  | -p force-systemd-env-229837                            | force-systemd-env-229837 | jenkins | v1.34.0 | 28 Oct 24 11:23 UTC | 28 Oct 24 11:23 UTC |
	| start   | -p cert-options-136781                                 | cert-options-136781      | jenkins | v1.34.0 | 28 Oct 24 11:23 UTC | 28 Oct 24 11:23 UTC |
	|         | --memory=2048                                          |                          |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1                              |                          |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15                          |                          |         |         |                     |                     |
	|         | --apiserver-names=localhost                            |                          |         |         |                     |                     |
	|         | --apiserver-names=www.google.com                       |                          |         |         |                     |                     |
	|         | --apiserver-port=8555                                  |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	| ssh     | cert-options-136781 ssh                                | cert-options-136781      | jenkins | v1.34.0 | 28 Oct 24 11:23 UTC | 28 Oct 24 11:23 UTC |
	|         | openssl x509 -text -noout -in                          |                          |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                          |         |         |                     |                     |
	| ssh     | -p cert-options-136781 -- sudo                         | cert-options-136781      | jenkins | v1.34.0 | 28 Oct 24 11:23 UTC | 28 Oct 24 11:23 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                          |         |         |                     |                     |
	| delete  | -p cert-options-136781                                 | cert-options-136781      | jenkins | v1.34.0 | 28 Oct 24 11:23 UTC | 28 Oct 24 11:23 UTC |
	| start   | -p old-k8s-version-674802                              | old-k8s-version-674802   | jenkins | v1.34.0 | 28 Oct 24 11:23 UTC | 28 Oct 24 11:26 UTC |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                          |         |         |                     |                     |
	|         | --kvm-network=default                                  |                          |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                          |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                          |         |         |                     |                     |
	|         | --keep-context=false                                   |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                          |         |         |                     |                     |
	| start   | -p cert-expiration-219316                              | cert-expiration-219316   | jenkins | v1.34.0 | 28 Oct 24 11:26 UTC | 28 Oct 24 11:26 UTC |
	|         | --memory=2048                                          |                          |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-674802        | old-k8s-version-674802   | jenkins | v1.34.0 | 28 Oct 24 11:26 UTC | 28 Oct 24 11:26 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                          |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                          |         |         |                     |                     |
	| delete  | -p cert-expiration-219316                              | cert-expiration-219316   | jenkins | v1.34.0 | 28 Oct 24 11:26 UTC | 28 Oct 24 11:26 UTC |
	| stop    | -p old-k8s-version-674802                              | old-k8s-version-674802   | jenkins | v1.34.0 | 28 Oct 24 11:26 UTC | 28 Oct 24 11:26 UTC |
	|         | --alsologtostderr -v=3                                 |                          |         |         |                     |                     |
	| start   | -p no-preload-196138                                   | no-preload-196138        | jenkins | v1.34.0 | 28 Oct 24 11:26 UTC | 28 Oct 24 11:27 UTC |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr                                      |                          |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                          |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-674802             | old-k8s-version-674802   | jenkins | v1.34.0 | 28 Oct 24 11:26 UTC | 28 Oct 24 11:26 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                          |         |         |                     |                     |
	| start   | -p old-k8s-version-674802                              | old-k8s-version-674802   | jenkins | v1.34.0 | 28 Oct 24 11:26 UTC |                     |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                          |         |         |                     |                     |
	|         | --kvm-network=default                                  |                          |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                          |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                          |         |         |                     |                     |
	|         | --keep-context=false                                   |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                          |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-196138             | no-preload-196138        | jenkins | v1.34.0 | 28 Oct 24 11:27 UTC | 28 Oct 24 11:27 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                          |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                          |         |         |                     |                     |
	| stop    | -p no-preload-196138                                   | no-preload-196138        | jenkins | v1.34.0 | 28 Oct 24 11:27 UTC | 28 Oct 24 11:27 UTC |
	|         | --alsologtostderr -v=3                                 |                          |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-196138                  | no-preload-196138        | jenkins | v1.34.0 | 28 Oct 24 11:27 UTC | 28 Oct 24 11:27 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                          |         |         |                     |                     |
	| start   | -p no-preload-196138                                   | no-preload-196138        | jenkins | v1.34.0 | 28 Oct 24 11:27 UTC | 28 Oct 24 11:32 UTC |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr                                      |                          |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                          |         |         |                     |                     |
	| image   | no-preload-196138 image list                           | no-preload-196138        | jenkins | v1.34.0 | 28 Oct 24 11:32 UTC | 28 Oct 24 11:32 UTC |
	|         | --format=json                                          |                          |         |         |                     |                     |
	| pause   | -p no-preload-196138                                   | no-preload-196138        | jenkins | v1.34.0 | 28 Oct 24 11:32 UTC | 28 Oct 24 11:32 UTC |
	|         | --alsologtostderr -v=1                                 |                          |         |         |                     |                     |
	| unpause | -p no-preload-196138                                   | no-preload-196138        | jenkins | v1.34.0 | 28 Oct 24 11:32 UTC | 28 Oct 24 11:32 UTC |
	|         | --alsologtostderr -v=1                                 |                          |         |         |                     |                     |
	| delete  | -p no-preload-196138                                   | no-preload-196138        | jenkins | v1.34.0 | 28 Oct 24 11:32 UTC | 28 Oct 24 11:32 UTC |
	| delete  | -p no-preload-196138                                   | no-preload-196138        | jenkins | v1.34.0 | 28 Oct 24 11:32 UTC | 28 Oct 24 11:32 UTC |
	| start   | -p embed-certs-542883                                  | embed-certs-542883       | jenkins | v1.34.0 | 28 Oct 24 11:32 UTC |                     |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                          |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                          |         |         |                     |                     |
	|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/28 11:32:42
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.23.2 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1028 11:32:42.376785 1533911 out.go:345] Setting OutFile to fd 1 ...
	I1028 11:32:42.376998 1533911 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 11:32:42.377032 1533911 out.go:358] Setting ErrFile to fd 2...
	I1028 11:32:42.377051 1533911 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 11:32:42.377437 1533911 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19876-1313708/.minikube/bin
	I1028 11:32:42.378482 1533911 out.go:352] Setting JSON to false
	I1028 11:32:42.379728 1533911 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":148493,"bootTime":1729966670,"procs":227,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1028 11:32:42.379819 1533911 start.go:139] virtualization:  
	I1028 11:32:42.382343 1533911 out.go:177] * [embed-certs-542883] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1028 11:32:42.384065 1533911 out.go:177]   - MINIKUBE_LOCATION=19876
	I1028 11:32:42.384150 1533911 notify.go:220] Checking for updates...
	I1028 11:32:42.387557 1533911 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 11:32:42.389706 1533911 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19876-1313708/kubeconfig
	I1028 11:32:42.391924 1533911 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19876-1313708/.minikube
	I1028 11:32:42.393791 1533911 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1028 11:32:42.396204 1533911 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 11:32:42.399990 1533911 config.go:182] Loaded profile config "old-k8s-version-674802": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I1028 11:32:42.400108 1533911 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 11:32:42.421793 1533911 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1028 11:32:42.421915 1533911 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1028 11:32:42.474704 1533911 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:34 OomKillDisable:true NGoroutines:52 SystemTime:2024-10-28 11:32:42.464687361 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1028 11:32:42.474818 1533911 docker.go:318] overlay module found
	I1028 11:32:42.477367 1533911 out.go:177] * Using the docker driver based on user configuration
	I1028 11:32:42.479597 1533911 start.go:297] selected driver: docker
	I1028 11:32:42.479613 1533911 start.go:901] validating driver "docker" against <nil>
	I1028 11:32:42.479731 1533911 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 11:32:42.480449 1533911 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1028 11:32:42.542358 1533911 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:34 OomKillDisable:true NGoroutines:52 SystemTime:2024-10-28 11:32:42.533551454 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1028 11:32:42.542574 1533911 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1028 11:32:42.542803 1533911 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 11:32:42.545418 1533911 out.go:177] * Using Docker driver with root privileges
	I1028 11:32:42.547774 1533911 cni.go:84] Creating CNI manager for ""
	I1028 11:32:42.547842 1533911 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1028 11:32:42.547856 1533911 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1028 11:32:42.547936 1533911 start.go:340] cluster config:
	{Name:embed-certs-542883 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-542883 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 11:32:42.550617 1533911 out.go:177] * Starting "embed-certs-542883" primary control-plane node in "embed-certs-542883" cluster
	I1028 11:32:42.552984 1533911 cache.go:121] Beginning downloading kic base image for docker with containerd
	I1028 11:32:42.555727 1533911 out.go:177] * Pulling base image v0.0.45-1729876044-19868 ...
	I1028 11:32:42.558029 1533911 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime containerd
	I1028 11:32:42.558080 1533911 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19876-1313708/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-containerd-overlay2-arm64.tar.lz4
	I1028 11:32:42.558099 1533911 cache.go:56] Caching tarball of preloaded images
	I1028 11:32:42.558119 1533911 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e in local docker daemon
	I1028 11:32:42.558193 1533911 preload.go:172] Found /home/jenkins/minikube-integration/19876-1313708/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1028 11:32:42.558204 1533911 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on containerd
	I1028 11:32:42.558310 1533911 profile.go:143] Saving config to /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/embed-certs-542883/config.json ...
	I1028 11:32:42.558327 1533911 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/embed-certs-542883/config.json: {Name:mk163284fb8b825a2d09aa810291bae333e1b90f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:32:42.576536 1533911 image.go:98] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e in local docker daemon, skipping pull
	I1028 11:32:42.576561 1533911 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e exists in daemon, skipping load
	I1028 11:32:42.576581 1533911 cache.go:194] Successfully downloaded all kic artifacts
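	Because the pinned kicbase digest is already present in the local docker daemon, both the registry pull and the tarball load are skipped. The presence check reduces to an image inspection; a rough standalone equivalent (the --format field is illustrative) would be:
	
	  # exits non-zero if the pinned base image is absent from the daemon
	  docker image inspect \
	    gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e \
	    --format '{{.Id}}'
	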
	I1028 11:32:42.576605 1533911 start.go:360] acquireMachinesLock for embed-certs-542883: {Name:mk38179026b4a8b0728f92075de25e9a2bfe102c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 11:32:42.577170 1533911 start.go:364] duration metric: took 536.672µs to acquireMachinesLock for "embed-certs-542883"
	I1028 11:32:42.577209 1533911 start.go:93] Provisioning new machine with config: &{Name:embed-certs-542883 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-542883 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1028 11:32:42.577289 1533911 start.go:125] createHost starting for "" (driver="docker")
	I1028 11:32:42.581759 1533911 out.go:235] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1028 11:32:42.582155 1533911 start.go:159] libmachine.API.Create for "embed-certs-542883" (driver="docker")
	I1028 11:32:42.582209 1533911 client.go:168] LocalClient.Create starting
	I1028 11:32:42.582346 1533911 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19876-1313708/.minikube/certs/ca.pem
	I1028 11:32:42.582388 1533911 main.go:141] libmachine: Decoding PEM data...
	I1028 11:32:42.582402 1533911 main.go:141] libmachine: Parsing certificate...
	I1028 11:32:42.582500 1533911 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19876-1313708/.minikube/certs/cert.pem
	I1028 11:32:42.582557 1533911 main.go:141] libmachine: Decoding PEM data...
	I1028 11:32:42.582571 1533911 main.go:141] libmachine: Parsing certificate...
	I1028 11:32:42.583018 1533911 cli_runner.go:164] Run: docker network inspect embed-certs-542883 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1028 11:32:42.608805 1533911 cli_runner.go:211] docker network inspect embed-certs-542883 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1028 11:32:42.608899 1533911 network_create.go:284] running [docker network inspect embed-certs-542883] to gather additional debugging logs...
	I1028 11:32:42.608916 1533911 cli_runner.go:164] Run: docker network inspect embed-certs-542883
	W1028 11:32:42.623476 1533911 cli_runner.go:211] docker network inspect embed-certs-542883 returned with exit code 1
	I1028 11:32:42.623505 1533911 network_create.go:287] error running [docker network inspect embed-certs-542883]: docker network inspect embed-certs-542883: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-542883 not found
	I1028 11:32:42.623519 1533911 network_create.go:289] output of [docker network inspect embed-certs-542883]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-542883 not found
	
	** /stderr **
	I1028 11:32:42.623615 1533911 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1028 11:32:42.640450 1533911 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-e8a2656e00eb IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:39:ff:27:31} reservation:<nil>}
	I1028 11:32:42.640917 1533911 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-e05de1d17c9e IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:94:c3:02:96} reservation:<nil>}
	I1028 11:32:42.641340 1533911 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-1756b1c23cfa IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:4f:9c:02:cb} reservation:<nil>}
	I1028 11:32:42.641708 1533911 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-5781a9c4ca19 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:02:42:50:3d:78:b9} reservation:<nil>}
	I1028 11:32:42.642254 1533911 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a184c0}
	I1028 11:32:42.642301 1533911 network_create.go:124] attempt to create docker network embed-certs-542883 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1028 11:32:42.642376 1533911 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-542883 embed-certs-542883
	I1028 11:32:42.720532 1533911 network_create.go:108] docker network embed-certs-542883 192.168.85.0/24 created
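	The sequence above is minikube's free-subnet scan: it inspects the existing bridge networks, skips each /24 already claimed by another profile (192.168.49.0 through 192.168.76.0 here), and creates the cluster network on the first free range. A minimal sketch of the same flow with plain docker commands, with the network name example-net as a hypothetical stand-in:
	
	  # list the subnets already claimed by existing networks
	  docker network ls --format '{{.Name}}' \
	    | xargs -n1 docker network inspect --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}'
	
	  # create a bridge network on the first free /24, as minikube did above
	  docker network create --driver=bridge \
	    --subnet=192.168.85.0/24 --gateway=192.168.85.1 \
	    -o com.docker.network.driver.mtu=1500 example-net
	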
	I1028 11:32:42.720566 1533911 kic.go:121] calculated static IP "192.168.85.2" for the "embed-certs-542883" container
	I1028 11:32:42.720650 1533911 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1028 11:32:42.735744 1533911 cli_runner.go:164] Run: docker volume create embed-certs-542883 --label name.minikube.sigs.k8s.io=embed-certs-542883 --label created_by.minikube.sigs.k8s.io=true
	I1028 11:32:42.754424 1533911 oci.go:103] Successfully created a docker volume embed-certs-542883
	I1028 11:32:42.754524 1533911 cli_runner.go:164] Run: docker run --rm --name embed-certs-542883-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-542883 --entrypoint /usr/bin/test -v embed-certs-542883:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e -d /var/lib
	I1028 11:32:43.427737 1533911 oci.go:107] Successfully prepared a docker volume embed-certs-542883
	I1028 11:32:43.427783 1533911 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime containerd
	I1028 11:32:43.427803 1533911 kic.go:194] Starting extracting preloaded images to volume ...
	I1028 11:32:43.427877 1533911 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19876-1313708/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-542883:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e -I lz4 -xf /preloaded.tar -C /extractDir
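	This step unpacks the cached image tarball directly into the freshly created volume by running tar inside a throwaway kicbase container, so the node container starts with its containerd image store pre-populated. The standalone equivalent, with the host path and volume name as placeholders:
	
	  docker run --rm --entrypoint /usr/bin/tar \
	    -v /path/to/preloaded-images.tar.lz4:/preloaded.tar:ro \
	    -v example-volume:/extractDir \
	    gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868 \
	    -I lz4 -xf /preloaded.tar -C /extractDir
	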
	I1028 11:32:45.308563 1522650 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 11:32:45.323590 1522650 api_server.go:72] duration metric: took 5m53.809706364s to wait for apiserver process to appear ...
	I1028 11:32:45.323614 1522650 api_server.go:88] waiting for apiserver healthz status ...
	I1028 11:32:45.323723 1522650 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1028 11:32:45.323782 1522650 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 11:32:45.369849 1522650 cri.go:89] found id: "c02d779e69c4a6181f499ea147b62985bdd68ffb9d61fe7dab43115ca4318de6"
	I1028 11:32:45.369868 1522650 cri.go:89] found id: "ba54ab63823c2fcfe3e9bc95fca852e480e0d8fae4071a23e1fc38d3e74384cc"
	I1028 11:32:45.369873 1522650 cri.go:89] found id: ""
	I1028 11:32:45.369880 1522650 logs.go:282] 2 containers: [c02d779e69c4a6181f499ea147b62985bdd68ffb9d61fe7dab43115ca4318de6 ba54ab63823c2fcfe3e9bc95fca852e480e0d8fae4071a23e1fc38d3e74384cc]
	I1028 11:32:45.369934 1522650 ssh_runner.go:195] Run: which crictl
	I1028 11:32:45.374584 1522650 ssh_runner.go:195] Run: which crictl
	I1028 11:32:45.379004 1522650 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1028 11:32:45.379074 1522650 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 11:32:45.433346 1522650 cri.go:89] found id: "6208543cc8b3c7edcccd800e0f9d98e845390bf870426de3226d81781dce3148"
	I1028 11:32:45.433423 1522650 cri.go:89] found id: "01a108b46e6f4f9217c1f90a9611bdbc7956ad16edbfd8093ad46cc6ef34b232"
	I1028 11:32:45.433442 1522650 cri.go:89] found id: ""
	I1028 11:32:45.433462 1522650 logs.go:282] 2 containers: [6208543cc8b3c7edcccd800e0f9d98e845390bf870426de3226d81781dce3148 01a108b46e6f4f9217c1f90a9611bdbc7956ad16edbfd8093ad46cc6ef34b232]
	I1028 11:32:45.433546 1522650 ssh_runner.go:195] Run: which crictl
	I1028 11:32:45.438315 1522650 ssh_runner.go:195] Run: which crictl
	I1028 11:32:45.441971 1522650 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1028 11:32:45.442046 1522650 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 11:32:45.507419 1522650 cri.go:89] found id: "b864ea5367f07235e01b7c4c4545bda20ba5924d99b8e542c0315227a77c2c59"
	I1028 11:32:45.507441 1522650 cri.go:89] found id: "2a9df06520f732f1766508da84b61f745cb047b5f7bcf5bf3ef9cb3891f6239f"
	I1028 11:32:45.507446 1522650 cri.go:89] found id: ""
	I1028 11:32:45.507453 1522650 logs.go:282] 2 containers: [b864ea5367f07235e01b7c4c4545bda20ba5924d99b8e542c0315227a77c2c59 2a9df06520f732f1766508da84b61f745cb047b5f7bcf5bf3ef9cb3891f6239f]
	I1028 11:32:45.507510 1522650 ssh_runner.go:195] Run: which crictl
	I1028 11:32:45.513603 1522650 ssh_runner.go:195] Run: which crictl
	I1028 11:32:45.517373 1522650 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1028 11:32:45.517452 1522650 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 11:32:45.565346 1522650 cri.go:89] found id: "31281b2de0e80c98175b18b80c8ece18d25bb88841661719ca7805a5cb795824"
	I1028 11:32:45.565381 1522650 cri.go:89] found id: "857580d96023ba113555b54f38493703fce44c36a25523c35c1fd07c51eee056"
	I1028 11:32:45.565386 1522650 cri.go:89] found id: ""
	I1028 11:32:45.565393 1522650 logs.go:282] 2 containers: [31281b2de0e80c98175b18b80c8ece18d25bb88841661719ca7805a5cb795824 857580d96023ba113555b54f38493703fce44c36a25523c35c1fd07c51eee056]
	I1028 11:32:45.565455 1522650 ssh_runner.go:195] Run: which crictl
	I1028 11:32:45.569124 1522650 ssh_runner.go:195] Run: which crictl
	I1028 11:32:45.572626 1522650 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1028 11:32:45.572699 1522650 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 11:32:45.624046 1522650 cri.go:89] found id: "c0ed41137fbff35ffcb34df99174bf1cb9e6e2fda2154d421ab797a438e507bf"
	I1028 11:32:45.624079 1522650 cri.go:89] found id: "8d4b3dad3dd90f3ec833f354de4e8225bdaf07199d5245c988b1fdbc527c1015"
	I1028 11:32:45.624084 1522650 cri.go:89] found id: ""
	I1028 11:32:45.624091 1522650 logs.go:282] 2 containers: [c0ed41137fbff35ffcb34df99174bf1cb9e6e2fda2154d421ab797a438e507bf 8d4b3dad3dd90f3ec833f354de4e8225bdaf07199d5245c988b1fdbc527c1015]
	I1028 11:32:45.624152 1522650 ssh_runner.go:195] Run: which crictl
	I1028 11:32:45.627765 1522650 ssh_runner.go:195] Run: which crictl
	I1028 11:32:45.631106 1522650 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 11:32:45.631183 1522650 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 11:32:45.680421 1522650 cri.go:89] found id: "056d20453e357e86aa3e62b0dd7d945c40ec05cc5f462727941eac3714718438"
	I1028 11:32:45.680444 1522650 cri.go:89] found id: "4937ca78533bbe1e9024be3e8c38035f4fb621e9cfcd8ef6fc974857b5f788d7"
	I1028 11:32:45.680460 1522650 cri.go:89] found id: ""
	I1028 11:32:45.680468 1522650 logs.go:282] 2 containers: [056d20453e357e86aa3e62b0dd7d945c40ec05cc5f462727941eac3714718438 4937ca78533bbe1e9024be3e8c38035f4fb621e9cfcd8ef6fc974857b5f788d7]
	I1028 11:32:45.680531 1522650 ssh_runner.go:195] Run: which crictl
	I1028 11:32:45.684137 1522650 ssh_runner.go:195] Run: which crictl
	I1028 11:32:45.687407 1522650 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1028 11:32:45.687486 1522650 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 11:32:45.741649 1522650 cri.go:89] found id: "42478c583a7df5e62ae3718bc78fb6be211abb490f9190466870859ec29e3bf3"
	I1028 11:32:45.741671 1522650 cri.go:89] found id: "120e0085c59b7ce7fd3c7afbb14ea7637d4c18b660f3d35631be06f9007e3a33"
	I1028 11:32:45.741675 1522650 cri.go:89] found id: ""
	I1028 11:32:45.741683 1522650 logs.go:282] 2 containers: [42478c583a7df5e62ae3718bc78fb6be211abb490f9190466870859ec29e3bf3 120e0085c59b7ce7fd3c7afbb14ea7637d4c18b660f3d35631be06f9007e3a33]
	I1028 11:32:45.741741 1522650 ssh_runner.go:195] Run: which crictl
	I1028 11:32:45.745863 1522650 ssh_runner.go:195] Run: which crictl
	I1028 11:32:45.749779 1522650 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 11:32:45.749843 1522650 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 11:32:45.801413 1522650 cri.go:89] found id: "9666309986efcb4076982c7df1d9e0c9f905cbcee4a0e3d7a1dcd2ab0132348b"
	I1028 11:32:45.801471 1522650 cri.go:89] found id: ""
	I1028 11:32:45.801481 1522650 logs.go:282] 1 containers: [9666309986efcb4076982c7df1d9e0c9f905cbcee4a0e3d7a1dcd2ab0132348b]
	I1028 11:32:45.801539 1522650 ssh_runner.go:195] Run: which crictl
	I1028 11:32:45.805656 1522650 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1028 11:32:45.805718 1522650 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1028 11:32:45.904645 1522650 cri.go:89] found id: "af354fdce961d0c931e3e7b6826943560aa505683dbea93dedbce9a94105e0f8"
	I1028 11:32:45.904670 1522650 cri.go:89] found id: "e4aa22206b37d13d9665658eeeb808da6cd7c1789a0887c07a5f6460c9dd38f5"
	I1028 11:32:45.904675 1522650 cri.go:89] found id: ""
	I1028 11:32:45.904682 1522650 logs.go:282] 2 containers: [af354fdce961d0c931e3e7b6826943560aa505683dbea93dedbce9a94105e0f8 e4aa22206b37d13d9665658eeeb808da6cd7c1789a0887c07a5f6460c9dd38f5]
	I1028 11:32:45.904738 1522650 ssh_runner.go:195] Run: which crictl
	I1028 11:32:45.910966 1522650 ssh_runner.go:195] Run: which crictl
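	The block above is the health-check inventory for the old-k8s-version-674802 profile: for each control-plane component, minikube lists the matching CRI containers over SSH and keeps both the current and the previous (pre-restart) container IDs so logs can be gathered from each. The underlying queries are plain crictl; <container-id> below is a placeholder for one of the returned IDs:
	
	  # all kube-apiserver containers, running or exited, IDs only
	  sudo crictl ps -a --quiet --name=kube-apiserver
	
	  # last 400 log lines of one container
	  sudo crictl logs --tail 400 <container-id>
	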
	I1028 11:32:45.917821 1522650 logs.go:123] Gathering logs for coredns [2a9df06520f732f1766508da84b61f745cb047b5f7bcf5bf3ef9cb3891f6239f] ...
	I1028 11:32:45.917843 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2a9df06520f732f1766508da84b61f745cb047b5f7bcf5bf3ef9cb3891f6239f"
	I1028 11:32:45.972711 1522650 logs.go:123] Gathering logs for kindnet [120e0085c59b7ce7fd3c7afbb14ea7637d4c18b660f3d35631be06f9007e3a33] ...
	I1028 11:32:45.972737 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 120e0085c59b7ce7fd3c7afbb14ea7637d4c18b660f3d35631be06f9007e3a33"
	I1028 11:32:46.067160 1522650 logs.go:123] Gathering logs for kubelet ...
	I1028 11:32:46.067189 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
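	The kubelet runs as a systemd unit inside the node container rather than as a CRI container, so its history is pulled from the journal instead of crictl. The same query can be reproduced against this profile from the host, for example:
	
	  minikube -p old-k8s-version-674802 ssh -- sudo journalctl -u kubelet -n 400
	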
	W1028 11:32:46.128234 1522650 logs.go:138] Found kubelet problem: Oct 28 11:27:11 old-k8s-version-674802 kubelet[662]: E1028 11:27:11.736433     662 reflector.go:138] object-"default"/"default-token-rkh5t": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-rkh5t" is forbidden: User "system:node:old-k8s-version-674802" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-674802' and this object
	W1028 11:32:46.128574 1522650 logs.go:138] Found kubelet problem: Oct 28 11:27:11 old-k8s-version-674802 kubelet[662]: E1028 11:27:11.767488     662 reflector.go:138] object-"kube-system"/"metrics-server-token-bnmqq": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-bnmqq" is forbidden: User "system:node:old-k8s-version-674802" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-674802' and this object
	W1028 11:32:46.128790 1522650 logs.go:138] Found kubelet problem: Oct 28 11:27:11 old-k8s-version-674802 kubelet[662]: E1028 11:27:11.769521     662 reflector.go:138] object-"kube-system"/"kindnet-token-rljtj": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-rljtj" is forbidden: User "system:node:old-k8s-version-674802" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-674802' and this object
	W1028 11:32:46.129006 1522650 logs.go:138] Found kubelet problem: Oct 28 11:27:11 old-k8s-version-674802 kubelet[662]: E1028 11:27:11.783592     662 reflector.go:138] object-"kube-system"/"kube-proxy-token-v6b5p": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-v6b5p" is forbidden: User "system:node:old-k8s-version-674802" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-674802' and this object
	W1028 11:32:46.129208 1522650 logs.go:138] Found kubelet problem: Oct 28 11:27:11 old-k8s-version-674802 kubelet[662]: E1028 11:27:11.786532     662 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-674802" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-674802' and this object
	W1028 11:32:46.129433 1522650 logs.go:138] Found kubelet problem: Oct 28 11:27:11 old-k8s-version-674802 kubelet[662]: E1028 11:27:11.786656     662 reflector.go:138] object-"kube-system"/"storage-provisioner-token-7s7sg": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-7s7sg" is forbidden: User "system:node:old-k8s-version-674802" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-674802' and this object
	W1028 11:32:46.129655 1522650 logs.go:138] Found kubelet problem: Oct 28 11:27:11 old-k8s-version-674802 kubelet[662]: E1028 11:27:11.786717     662 reflector.go:138] object-"kube-system"/"coredns-token-t6lq7": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-t6lq7" is forbidden: User "system:node:old-k8s-version-674802" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-674802' and this object
	W1028 11:32:46.129858 1522650 logs.go:138] Found kubelet problem: Oct 28 11:27:11 old-k8s-version-674802 kubelet[662]: E1028 11:27:11.786765     662 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-674802" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-674802' and this object
	W1028 11:32:46.140618 1522650 logs.go:138] Found kubelet problem: Oct 28 11:27:13 old-k8s-version-674802 kubelet[662]: E1028 11:27:13.704595     662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1028 11:32:46.143017 1522650 logs.go:138] Found kubelet problem: Oct 28 11:27:14 old-k8s-version-674802 kubelet[662]: E1028 11:27:14.680234     662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 11:32:46.145830 1522650 logs.go:138] Found kubelet problem: Oct 28 11:27:27 old-k8s-version-674802 kubelet[662]: E1028 11:27:27.405191     662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1028 11:32:46.148040 1522650 logs.go:138] Found kubelet problem: Oct 28 11:27:36 old-k8s-version-674802 kubelet[662]: E1028 11:27:36.774117     662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
	W1028 11:32:46.148376 1522650 logs.go:138] Found kubelet problem: Oct 28 11:27:37 old-k8s-version-674802 kubelet[662]: E1028 11:27:37.779544     662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
	W1028 11:32:46.148704 1522650 logs.go:138] Found kubelet problem: Oct 28 11:27:38 old-k8s-version-674802 kubelet[662]: E1028 11:27:38.781616     662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
	W1028 11:32:46.148892 1522650 logs.go:138] Found kubelet problem: Oct 28 11:27:39 old-k8s-version-674802 kubelet[662]: E1028 11:27:39.404833     662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 11:32:46.149721 1522650 logs.go:138] Found kubelet problem: Oct 28 11:27:45 old-k8s-version-674802 kubelet[662]: E1028 11:27:45.806023     662 pod_workers.go:191] Error syncing pod eb6e0fb4-e030-4eb7-8b96-477de7691df6 ("storage-provisioner_kube-system(eb6e0fb4-e030-4eb7-8b96-477de7691df6)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(eb6e0fb4-e030-4eb7-8b96-477de7691df6)"
	W1028 11:32:46.152611 1522650 logs.go:138] Found kubelet problem: Oct 28 11:27:51 old-k8s-version-674802 kubelet[662]: E1028 11:27:51.412111     662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1028 11:32:46.153207 1522650 logs.go:138] Found kubelet problem: Oct 28 11:27:52 old-k8s-version-674802 kubelet[662]: E1028 11:27:52.828529     662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
	W1028 11:32:46.153668 1522650 logs.go:138] Found kubelet problem: Oct 28 11:27:58 old-k8s-version-674802 kubelet[662]: E1028 11:27:58.495326     662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
	W1028 11:32:46.153849 1522650 logs.go:138] Found kubelet problem: Oct 28 11:28:06 old-k8s-version-674802 kubelet[662]: E1028 11:28:06.398915     662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 11:32:46.154173 1522650 logs.go:138] Found kubelet problem: Oct 28 11:28:11 old-k8s-version-674802 kubelet[662]: E1028 11:28:11.397340     662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
	W1028 11:32:46.154353 1522650 logs.go:138] Found kubelet problem: Oct 28 11:28:18 old-k8s-version-674802 kubelet[662]: E1028 11:28:18.397191     662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 11:32:46.154935 1522650 logs.go:138] Found kubelet problem: Oct 28 11:28:25 old-k8s-version-674802 kubelet[662]: E1028 11:28:25.930176     662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
	W1028 11:32:46.155258 1522650 logs.go:138] Found kubelet problem: Oct 28 11:28:28 old-k8s-version-674802 kubelet[662]: E1028 11:28:28.494838     662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
	W1028 11:32:46.157743 1522650 logs.go:138] Found kubelet problem: Oct 28 11:28:32 old-k8s-version-674802 kubelet[662]: E1028 11:28:32.410167     662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1028 11:32:46.158074 1522650 logs.go:138] Found kubelet problem: Oct 28 11:28:39 old-k8s-version-674802 kubelet[662]: E1028 11:28:39.398920     662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
	W1028 11:32:46.158257 1522650 logs.go:138] Found kubelet problem: Oct 28 11:28:43 old-k8s-version-674802 kubelet[662]: E1028 11:28:43.399781     662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 11:32:46.158584 1522650 logs.go:138] Found kubelet problem: Oct 28 11:28:50 old-k8s-version-674802 kubelet[662]: E1028 11:28:50.396875     662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
	W1028 11:32:46.158767 1522650 logs.go:138] Found kubelet problem: Oct 28 11:28:56 old-k8s-version-674802 kubelet[662]: E1028 11:28:56.397361     662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 11:32:46.159090 1522650 logs.go:138] Found kubelet problem: Oct 28 11:29:02 old-k8s-version-674802 kubelet[662]: E1028 11:29:02.396819     662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
	W1028 11:32:46.159293 1522650 logs.go:138] Found kubelet problem: Oct 28 11:29:08 old-k8s-version-674802 kubelet[662]: E1028 11:29:08.398191     662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 11:32:46.159946 1522650 logs.go:138] Found kubelet problem: Oct 28 11:29:14 old-k8s-version-674802 kubelet[662]: E1028 11:29:14.078563     662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
	W1028 11:32:46.160290 1522650 logs.go:138] Found kubelet problem: Oct 28 11:29:18 old-k8s-version-674802 kubelet[662]: E1028 11:29:18.494837     662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
	W1028 11:32:46.160481 1522650 logs.go:138] Found kubelet problem: Oct 28 11:29:22 old-k8s-version-674802 kubelet[662]: E1028 11:29:22.397401     662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 11:32:46.160804 1522650 logs.go:138] Found kubelet problem: Oct 28 11:29:29 old-k8s-version-674802 kubelet[662]: E1028 11:29:29.397589     662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
	W1028 11:32:46.160987 1522650 logs.go:138] Found kubelet problem: Oct 28 11:29:33 old-k8s-version-674802 kubelet[662]: E1028 11:29:33.397285     662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 11:32:46.161311 1522650 logs.go:138] Found kubelet problem: Oct 28 11:29:40 old-k8s-version-674802 kubelet[662]: E1028 11:29:40.396913     662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
	W1028 11:32:46.161495 1522650 logs.go:138] Found kubelet problem: Oct 28 11:29:45 old-k8s-version-674802 kubelet[662]: E1028 11:29:45.397674     662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 11:32:46.161821 1522650 logs.go:138] Found kubelet problem: Oct 28 11:29:52 old-k8s-version-674802 kubelet[662]: E1028 11:29:52.396836     662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
	W1028 11:32:46.164259 1522650 logs.go:138] Found kubelet problem: Oct 28 11:29:57 old-k8s-version-674802 kubelet[662]: E1028 11:29:57.431254     662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1028 11:32:46.164587 1522650 logs.go:138] Found kubelet problem: Oct 28 11:30:06 old-k8s-version-674802 kubelet[662]: E1028 11:30:06.396711     662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
	W1028 11:32:46.164771 1522650 logs.go:138] Found kubelet problem: Oct 28 11:30:12 old-k8s-version-674802 kubelet[662]: E1028 11:30:12.397420     662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 11:32:46.165094 1522650 logs.go:138] Found kubelet problem: Oct 28 11:30:19 old-k8s-version-674802 kubelet[662]: E1028 11:30:19.397829     662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
	W1028 11:32:46.165276 1522650 logs.go:138] Found kubelet problem: Oct 28 11:30:23 old-k8s-version-674802 kubelet[662]: E1028 11:30:23.400863     662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 11:32:46.165874 1522650 logs.go:138] Found kubelet problem: Oct 28 11:30:35 old-k8s-version-674802 kubelet[662]: E1028 11:30:35.284773     662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
	W1028 11:32:46.166056 1522650 logs.go:138] Found kubelet problem: Oct 28 11:30:37 old-k8s-version-674802 kubelet[662]: E1028 11:30:37.398847     662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 11:32:46.166378 1522650 logs.go:138] Found kubelet problem: Oct 28 11:30:38 old-k8s-version-674802 kubelet[662]: E1028 11:30:38.494821     662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
	W1028 11:32:46.166560 1522650 logs.go:138] Found kubelet problem: Oct 28 11:30:52 old-k8s-version-674802 kubelet[662]: E1028 11:30:52.398247     662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 11:32:46.166884 1522650 logs.go:138] Found kubelet problem: Oct 28 11:30:53 old-k8s-version-674802 kubelet[662]: E1028 11:30:53.397067     662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
	W1028 11:32:46.167207 1522650 logs.go:138] Found kubelet problem: Oct 28 11:31:04 old-k8s-version-674802 kubelet[662]: E1028 11:31:04.396824     662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
	W1028 11:32:46.167388 1522650 logs.go:138] Found kubelet problem: Oct 28 11:31:04 old-k8s-version-674802 kubelet[662]: E1028 11:31:04.398636     662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 11:32:46.167725 1522650 logs.go:138] Found kubelet problem: Oct 28 11:31:18 old-k8s-version-674802 kubelet[662]: E1028 11:31:18.397153     662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
	W1028 11:32:46.167908 1522650 logs.go:138] Found kubelet problem: Oct 28 11:31:18 old-k8s-version-674802 kubelet[662]: E1028 11:31:18.397448     662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 11:32:46.168236 1522650 logs.go:138] Found kubelet problem: Oct 28 11:31:30 old-k8s-version-674802 kubelet[662]: E1028 11:31:30.396768     662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
	W1028 11:32:46.168418 1522650 logs.go:138] Found kubelet problem: Oct 28 11:31:32 old-k8s-version-674802 kubelet[662]: E1028 11:31:32.397834     662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 11:32:46.168745 1522650 logs.go:138] Found kubelet problem: Oct 28 11:31:42 old-k8s-version-674802 kubelet[662]: E1028 11:31:42.396795     662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
	W1028 11:32:46.168925 1522650 logs.go:138] Found kubelet problem: Oct 28 11:31:47 old-k8s-version-674802 kubelet[662]: E1028 11:31:47.398090     662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 11:32:46.169249 1522650 logs.go:138] Found kubelet problem: Oct 28 11:31:56 old-k8s-version-674802 kubelet[662]: E1028 11:31:56.396772     662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
	W1028 11:32:46.169434 1522650 logs.go:138] Found kubelet problem: Oct 28 11:31:58 old-k8s-version-674802 kubelet[662]: E1028 11:31:58.397362     662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 11:32:46.169758 1522650 logs.go:138] Found kubelet problem: Oct 28 11:32:07 old-k8s-version-674802 kubelet[662]: E1028 11:32:07.399844     662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
	W1028 11:32:46.169942 1522650 logs.go:138] Found kubelet problem: Oct 28 11:32:13 old-k8s-version-674802 kubelet[662]: E1028 11:32:13.397227     662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 11:32:46.170339 1522650 logs.go:138] Found kubelet problem: Oct 28 11:32:19 old-k8s-version-674802 kubelet[662]: E1028 11:32:19.398441     662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
	W1028 11:32:46.170526 1522650 logs.go:138] Found kubelet problem: Oct 28 11:32:28 old-k8s-version-674802 kubelet[662]: E1028 11:32:28.398320     662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 11:32:46.170854 1522650 logs.go:138] Found kubelet problem: Oct 28 11:32:34 old-k8s-version-674802 kubelet[662]: E1028 11:32:34.396869     662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
	W1028 11:32:46.173305 1522650 logs.go:138] Found kubelet problem: Oct 28 11:32:39 old-k8s-version-674802 kubelet[662]: E1028 11:32:39.419884     662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
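Every warning in the block above traces to one root cause, spelled out in the final ErrImagePull line: the node's DNS resolver (192.168.76.1:53) cannot resolve fake.domain. That registry appears to be deliberately unresolvable in this test (the metrics-server image is pinned to it), so these entries are expected noise rather than the failure itself. A minimal sketch of how one might confirm the DNS failure by hand, assuming shell access to the node via `minikube ssh` (profile name taken from this log):

    # Hypothetical diagnosis; run on the host that owns this profile.
    minikube -p old-k8s-version-674802 ssh
    # Inside the node: the lookup should fail exactly as the kubelet reported.
    getent hosts fake.domain   # expected: no output, non-zero exit status
    # Pulling by hand should reproduce the kubelet's ErrImagePull message.
    sudo crictl pull fake.domain/registry.k8s.io/echoserver:1.4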
	I1028 11:32:46.173316 1522650 logs.go:123] Gathering logs for etcd [6208543cc8b3c7edcccd800e0f9d98e845390bf870426de3226d81781dce3148] ...
	I1028 11:32:46.173330 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6208543cc8b3c7edcccd800e0f9d98e845390bf870426de3226d81781dce3148"
	I1028 11:32:46.223969 1522650 logs.go:123] Gathering logs for kube-proxy [c0ed41137fbff35ffcb34df99174bf1cb9e6e2fda2154d421ab797a438e507bf] ...
	I1028 11:32:46.224006 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0ed41137fbff35ffcb34df99174bf1cb9e6e2fda2154d421ab797a438e507bf"
	I1028 11:32:46.277259 1522650 logs.go:123] Gathering logs for kubernetes-dashboard [9666309986efcb4076982c7df1d9e0c9f905cbcee4a0e3d7a1dcd2ab0132348b] ...
	I1028 11:32:46.277289 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9666309986efcb4076982c7df1d9e0c9f905cbcee4a0e3d7a1dcd2ab0132348b"
	I1028 11:32:46.337485 1522650 logs.go:123] Gathering logs for storage-provisioner [e4aa22206b37d13d9665658eeeb808da6cd7c1789a0887c07a5f6460c9dd38f5] ...
	I1028 11:32:46.337520 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e4aa22206b37d13d9665658eeeb808da6cd7c1789a0887c07a5f6460c9dd38f5"
	I1028 11:32:46.395372 1522650 logs.go:123] Gathering logs for container status ...
	I1028 11:32:46.395422 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 11:32:46.450094 1522650 logs.go:123] Gathering logs for describe nodes ...
	I1028 11:32:46.450127 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 11:32:46.647254 1522650 logs.go:123] Gathering logs for kube-scheduler [31281b2de0e80c98175b18b80c8ece18d25bb88841661719ca7805a5cb795824] ...
	I1028 11:32:46.647867 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31281b2de0e80c98175b18b80c8ece18d25bb88841661719ca7805a5cb795824"
	I1028 11:32:46.698393 1522650 logs.go:123] Gathering logs for kube-controller-manager [056d20453e357e86aa3e62b0dd7d945c40ec05cc5f462727941eac3714718438] ...
	I1028 11:32:46.698421 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 056d20453e357e86aa3e62b0dd7d945c40ec05cc5f462727941eac3714718438"
	I1028 11:32:46.756978 1522650 logs.go:123] Gathering logs for kube-controller-manager [4937ca78533bbe1e9024be3e8c38035f4fb621e9cfcd8ef6fc974857b5f788d7] ...
	I1028 11:32:46.757015 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4937ca78533bbe1e9024be3e8c38035f4fb621e9cfcd8ef6fc974857b5f788d7"
	I1028 11:32:46.831122 1522650 logs.go:123] Gathering logs for kindnet [42478c583a7df5e62ae3718bc78fb6be211abb490f9190466870859ec29e3bf3] ...
	I1028 11:32:46.831163 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 42478c583a7df5e62ae3718bc78fb6be211abb490f9190466870859ec29e3bf3"
	I1028 11:32:46.880307 1522650 logs.go:123] Gathering logs for etcd [01a108b46e6f4f9217c1f90a9611bdbc7956ad16edbfd8093ad46cc6ef34b232] ...
	I1028 11:32:46.880340 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 01a108b46e6f4f9217c1f90a9611bdbc7956ad16edbfd8093ad46cc6ef34b232"
	I1028 11:32:46.936132 1522650 logs.go:123] Gathering logs for kube-scheduler [857580d96023ba113555b54f38493703fce44c36a25523c35c1fd07c51eee056] ...
	I1028 11:32:46.936165 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 857580d96023ba113555b54f38493703fce44c36a25523c35c1fd07c51eee056"
	I1028 11:32:46.982104 1522650 logs.go:123] Gathering logs for kube-apiserver [ba54ab63823c2fcfe3e9bc95fca852e480e0d8fae4071a23e1fc38d3e74384cc] ...
	I1028 11:32:46.982133 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ba54ab63823c2fcfe3e9bc95fca852e480e0d8fae4071a23e1fc38d3e74384cc"
	I1028 11:32:47.048875 1522650 logs.go:123] Gathering logs for coredns [b864ea5367f07235e01b7c4c4545bda20ba5924d99b8e542c0315227a77c2c59] ...
	I1028 11:32:47.048911 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b864ea5367f07235e01b7c4c4545bda20ba5924d99b8e542c0315227a77c2c59"
	I1028 11:32:47.093129 1522650 logs.go:123] Gathering logs for kube-proxy [8d4b3dad3dd90f3ec833f354de4e8225bdaf07199d5245c988b1fdbc527c1015] ...
	I1028 11:32:47.093157 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d4b3dad3dd90f3ec833f354de4e8225bdaf07199d5245c988b1fdbc527c1015"
	I1028 11:32:47.132824 1522650 logs.go:123] Gathering logs for storage-provisioner [af354fdce961d0c931e3e7b6826943560aa505683dbea93dedbce9a94105e0f8] ...
	I1028 11:32:47.132849 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 af354fdce961d0c931e3e7b6826943560aa505683dbea93dedbce9a94105e0f8"
	I1028 11:32:47.172011 1522650 logs.go:123] Gathering logs for containerd ...
	I1028 11:32:47.172037 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1028 11:32:47.239434 1522650 logs.go:123] Gathering logs for dmesg ...
	I1028 11:32:47.239469 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 11:32:47.257467 1522650 logs.go:123] Gathering logs for kube-apiserver [c02d779e69c4a6181f499ea147b62985bdd68ffb9d61fe7dab43115ca4318de6] ...
	I1028 11:32:47.257498 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c02d779e69c4a6181f499ea147b62985bdd68ffb9d61fe7dab43115ca4318de6"
	I1028 11:32:47.317252 1522650 out.go:358] Setting ErrFile to fd 2...
	I1028 11:32:47.317286 1522650 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1028 11:32:47.317350 1522650 out.go:270] X Problems detected in kubelet:
	W1028 11:32:47.317369 1522650 out.go:270]   Oct 28 11:32:13 old-k8s-version-674802 kubelet[662]: E1028 11:32:13.397227     662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 11:32:47.317384 1522650 out.go:270]   Oct 28 11:32:19 old-k8s-version-674802 kubelet[662]: E1028 11:32:19.398441     662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
	W1028 11:32:47.317397 1522650 out.go:270]   Oct 28 11:32:28 old-k8s-version-674802 kubelet[662]: E1028 11:32:28.398320     662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 11:32:47.317405 1522650 out.go:270]   Oct 28 11:32:34 old-k8s-version-674802 kubelet[662]: E1028 11:32:34.396869     662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
	W1028 11:32:47.317419 1522650 out.go:270]   Oct 28 11:32:39 old-k8s-version-674802 kubelet[662]: E1028 11:32:39.419884     662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	I1028 11:32:47.317439 1522650 out.go:358] Setting ErrFile to fd 2...
	I1028 11:32:47.317446 1522650 out.go:392] TERM=,COLORTERM=, which probably does not support color
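The "back-off 2m40s" figure in the dashboard-metrics-scraper entries is a step in the kubelet's crash-loop backoff, which starts at 10s, doubles after each failed restart, and is capped at 5m; 2m40s (160s) is the fifth step, so this container has already failed around five restarts. Purely illustrative arithmetic:

    # Kubelet crash-loop backoff: starts at 10s, doubles, caps at 300s.
    backoff=10
    for step in 1 2 3 4 5 6; do
      echo "step $step: ${backoff}s"
      backoff=$(( backoff * 2 > 300 ? 300 : backoff * 2 ))
    done
    # step 5 prints 160s == 2m40s, matching the log above.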
	I1028 11:32:48.132711 1533911 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19876-1313708/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-542883:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e -I lz4 -xf /preloaded.tar -C /extractDir: (4.704798769s)
	I1028 11:32:48.132743 1533911 kic.go:203] duration metric: took 4.704935744s to extract preloaded images to volume ...
	W1028 11:32:48.132879 1533911 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1028 11:32:48.132988 1533911 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1028 11:32:48.191526 1533911 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-542883 --name embed-certs-542883 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-542883 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-542883 --network embed-certs-542883 --ip 192.168.85.2 --volume embed-certs-542883:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e
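The kic container publishes SSH, the API server, and the registry ports on ephemeral loopback ports (`--publish=127.0.0.1::22` and friends, with no host port given), and the host port Docker picked for 22/tcp is what the SSH client below connects to (40385 in this run). A hedged way to look those mappings up while the container is running:

    # Ask Docker which loopback ports were bound for the published ports.
    docker port embed-certs-542883 22/tcp     # expected: 127.0.0.1:40385
    docker port embed-certs-542883 8443/tcp   # the Kubernetes API endpoint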
	I1028 11:32:48.518235 1533911 cli_runner.go:164] Run: docker container inspect embed-certs-542883 --format={{.State.Running}}
	I1028 11:32:48.536429 1533911 cli_runner.go:164] Run: docker container inspect embed-certs-542883 --format={{.State.Status}}
	I1028 11:32:48.560183 1533911 cli_runner.go:164] Run: docker exec embed-certs-542883 stat /var/lib/dpkg/alternatives/iptables
	I1028 11:32:48.632490 1533911 oci.go:144] the created container "embed-certs-542883" has a running status.
	I1028 11:32:48.632524 1533911 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19876-1313708/.minikube/machines/embed-certs-542883/id_rsa...
	I1028 11:32:49.179713 1533911 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19876-1313708/.minikube/machines/embed-certs-542883/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1028 11:32:49.207087 1533911 cli_runner.go:164] Run: docker container inspect embed-certs-542883 --format={{.State.Status}}
	I1028 11:32:49.229265 1533911 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1028 11:32:49.229289 1533911 kic_runner.go:114] Args: [docker exec --privileged embed-certs-542883 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1028 11:32:49.339865 1533911 cli_runner.go:164] Run: docker container inspect embed-certs-542883 --format={{.State.Status}}
	I1028 11:32:49.367082 1533911 machine.go:93] provisionDockerMachine start ...
	I1028 11:32:49.367181 1533911 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-542883
	I1028 11:32:49.386731 1533911 main.go:141] libmachine: Using SSH client type: native
	I1028 11:32:49.387020 1533911 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x415580] 0x417dc0 <nil>  [] 0s} 127.0.0.1 40385 <nil> <nil>}
	I1028 11:32:49.387038 1533911 main.go:141] libmachine: About to run SSH command:
	hostname
	I1028 11:32:49.541148 1533911 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-542883
	
	I1028 11:32:49.541190 1533911 ubuntu.go:169] provisioning hostname "embed-certs-542883"
	I1028 11:32:49.541262 1533911 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-542883
	I1028 11:32:49.558765 1533911 main.go:141] libmachine: Using SSH client type: native
	I1028 11:32:49.559012 1533911 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x415580] 0x417dc0 <nil>  [] 0s} 127.0.0.1 40385 <nil> <nil>}
	I1028 11:32:49.559030 1533911 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-542883 && echo "embed-certs-542883" | sudo tee /etc/hostname
	I1028 11:32:49.722988 1533911 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-542883
	
	I1028 11:32:49.723130 1533911 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-542883
	I1028 11:32:49.750433 1533911 main.go:141] libmachine: Using SSH client type: native
	I1028 11:32:49.750748 1533911 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x415580] 0x417dc0 <nil>  [] 0s} 127.0.0.1 40385 <nil> <nil>}
	I1028 11:32:49.750781 1533911 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-542883' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-542883/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-542883' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 11:32:49.880363 1533911 main.go:141] libmachine: SSH cmd err, output: <nil>: 
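The script above follows the Debian/Ubuntu convention of mapping a machine's own hostname to 127.0.1.1 (not 127.0.0.1): it rewrites an existing 127.0.1.1 entry if one is present and appends one otherwise. A quick check of the result from inside the node:

    # The node should now resolve its own hostname locally.
    grep '^127.0.1.1' /etc/hosts     # expected: 127.0.1.1 embed-certs-542883
    getent hosts embed-certs-542883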
	I1028 11:32:49.880438 1533911 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19876-1313708/.minikube CaCertPath:/home/jenkins/minikube-integration/19876-1313708/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19876-1313708/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19876-1313708/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19876-1313708/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19876-1313708/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19876-1313708/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19876-1313708/.minikube}
	I1028 11:32:49.880496 1533911 ubuntu.go:177] setting up certificates
	I1028 11:32:49.880531 1533911 provision.go:84] configureAuth start
	I1028 11:32:49.880628 1533911 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-542883
	I1028 11:32:49.902650 1533911 provision.go:143] copyHostCerts
	I1028 11:32:49.902709 1533911 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-1313708/.minikube/ca.pem, removing ...
	I1028 11:32:49.902719 1533911 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-1313708/.minikube/ca.pem
	I1028 11:32:49.902793 1533911 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-1313708/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19876-1313708/.minikube/ca.pem (1078 bytes)
	I1028 11:32:49.902878 1533911 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-1313708/.minikube/cert.pem, removing ...
	I1028 11:32:49.902883 1533911 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-1313708/.minikube/cert.pem
	I1028 11:32:49.902907 1533911 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-1313708/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19876-1313708/.minikube/cert.pem (1123 bytes)
	I1028 11:32:49.902960 1533911 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-1313708/.minikube/key.pem, removing ...
	I1028 11:32:49.902965 1533911 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-1313708/.minikube/key.pem
	I1028 11:32:49.902986 1533911 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-1313708/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19876-1313708/.minikube/key.pem (1675 bytes)
	I1028 11:32:49.903037 1533911 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19876-1313708/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19876-1313708/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19876-1313708/.minikube/certs/ca-key.pem org=jenkins.embed-certs-542883 san=[127.0.0.1 192.168.85.2 embed-certs-542883 localhost minikube]
	I1028 11:32:50.055296 1533911 provision.go:177] copyRemoteCerts
	I1028 11:32:50.055373 1533911 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 11:32:50.055423 1533911 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-542883
	I1028 11:32:50.071940 1533911 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40385 SSHKeyPath:/home/jenkins/minikube-integration/19876-1313708/.minikube/machines/embed-certs-542883/id_rsa Username:docker}
	I1028 11:32:50.165089 1533911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-1313708/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1028 11:32:50.193238 1533911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-1313708/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1028 11:32:50.219000 1533911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-1313708/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1028 11:32:50.245276 1533911 provision.go:87] duration metric: took 364.718072ms to configureAuth
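The server certificate generated during configureAuth carries the SAN list shown at 11:32:49 (127.0.0.1, 192.168.85.2, embed-certs-542883, localhost, minikube), which is why one cert serves both loopback port-forwarded connections and connections to the container IP. A sketch for inspecting it on the node, using the destination path from the scp lines above:

    # Print the Subject Alternative Names baked into the provisioned cert.
    sudo openssl x509 -in /etc/docker/server.pem -noout -text \
      | grep -A1 'Subject Alternative Name'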
	I1028 11:32:50.245350 1533911 ubuntu.go:193] setting minikube options for container-runtime
	I1028 11:32:50.245560 1533911 config.go:182] Loaded profile config "embed-certs-542883": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
	I1028 11:32:50.245576 1533911 machine.go:96] duration metric: took 878.470456ms to provisionDockerMachine
	I1028 11:32:50.245584 1533911 client.go:171] duration metric: took 7.663368273s to LocalClient.Create
	I1028 11:32:50.245613 1533911 start.go:167] duration metric: took 7.663459974s to libmachine.API.Create "embed-certs-542883"
	I1028 11:32:50.245624 1533911 start.go:293] postStartSetup for "embed-certs-542883" (driver="docker")
	I1028 11:32:50.245634 1533911 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 11:32:50.245699 1533911 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 11:32:50.245743 1533911 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-542883
	I1028 11:32:50.263513 1533911 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40385 SSHKeyPath:/home/jenkins/minikube-integration/19876-1313708/.minikube/machines/embed-certs-542883/id_rsa Username:docker}
	I1028 11:32:50.360793 1533911 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 11:32:50.363963 1533911 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1028 11:32:50.364009 1533911 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1028 11:32:50.364020 1533911 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1028 11:32:50.364027 1533911 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1028 11:32:50.364041 1533911 filesync.go:126] Scanning /home/jenkins/minikube-integration/19876-1313708/.minikube/addons for local assets ...
	I1028 11:32:50.364099 1533911 filesync.go:126] Scanning /home/jenkins/minikube-integration/19876-1313708/.minikube/files for local assets ...
	I1028 11:32:50.364185 1533911 filesync.go:149] local asset: /home/jenkins/minikube-integration/19876-1313708/.minikube/files/etc/ssl/certs/13190982.pem -> 13190982.pem in /etc/ssl/certs
	I1028 11:32:50.364299 1533911 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 11:32:50.373027 1533911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-1313708/.minikube/files/etc/ssl/certs/13190982.pem --> /etc/ssl/certs/13190982.pem (1708 bytes)
	I1028 11:32:50.402964 1533911 start.go:296] duration metric: took 157.325869ms for postStartSetup
	I1028 11:32:50.403338 1533911 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-542883
	I1028 11:32:50.420444 1533911 profile.go:143] Saving config to /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/embed-certs-542883/config.json ...
	I1028 11:32:50.420742 1533911 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1028 11:32:50.420795 1533911 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-542883
	I1028 11:32:50.439003 1533911 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40385 SSHKeyPath:/home/jenkins/minikube-integration/19876-1313708/.minikube/machines/embed-certs-542883/id_rsa Username:docker}
	I1028 11:32:50.528289 1533911 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1028 11:32:50.532792 1533911 start.go:128] duration metric: took 7.955483107s to createHost
	I1028 11:32:50.532817 1533911 start.go:83] releasing machines lock for "embed-certs-542883", held for 7.955628969s
	I1028 11:32:50.532887 1533911 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-542883
	I1028 11:32:50.548916 1533911 ssh_runner.go:195] Run: cat /version.json
	I1028 11:32:50.548987 1533911 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-542883
	I1028 11:32:50.548916 1533911 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 11:32:50.549117 1533911 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-542883
	I1028 11:32:50.572058 1533911 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40385 SSHKeyPath:/home/jenkins/minikube-integration/19876-1313708/.minikube/machines/embed-certs-542883/id_rsa Username:docker}
	I1028 11:32:50.573233 1533911 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40385 SSHKeyPath:/home/jenkins/minikube-integration/19876-1313708/.minikube/machines/embed-certs-542883/id_rsa Username:docker}
	I1028 11:32:50.796632 1533911 ssh_runner.go:195] Run: systemctl --version
	I1028 11:32:50.800971 1533911 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1028 11:32:50.805119 1533911 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I1028 11:32:50.830749 1533911 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I1028 11:32:50.830829 1533911 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 11:32:50.857686 1533911 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
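Rather than deleting competing CNI configs, minikube patches the loopback config in place and sidelines the bridge/podman configs by renaming them with a .mk_disabled suffix, so the runtime no longer matches them while the originals stay recoverable; only the network minikube installs later in this run (kindnet, per the recommendation below) ends up active. A sketch of what the directory should look like after this step:

    ls -la /etc/cni/net.d/
    # expected, per the log line above:
    #   87-podman-bridge.conflist.mk_disabled
    #   100-crio-bridge.conf.mk_disabled
    #   plus the loopback config, patched in place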
	I1028 11:32:50.857712 1533911 start.go:495] detecting cgroup driver to use...
	I1028 11:32:50.857768 1533911 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1028 11:32:50.857835 1533911 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1028 11:32:50.869945 1533911 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1028 11:32:50.881779 1533911 docker.go:217] disabling cri-docker service (if available) ...
	I1028 11:32:50.881842 1533911 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 11:32:50.895382 1533911 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 11:32:50.914962 1533911 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 11:32:51.006677 1533911 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 11:32:51.109510 1533911 docker.go:233] disabling docker service ...
	I1028 11:32:51.109629 1533911 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 11:32:51.131884 1533911 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 11:32:51.144891 1533911 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 11:32:51.236896 1533911 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 11:32:51.321851 1533911 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1028 11:32:51.333782 1533911 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 11:32:51.350766 1533911 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1028 11:32:51.361252 1533911 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1028 11:32:51.371723 1533911 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1028 11:32:51.371837 1533911 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1028 11:32:51.382019 1533911 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1028 11:32:51.392182 1533911 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1028 11:32:51.407931 1533911 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1028 11:32:51.431278 1533911 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 11:32:51.441203 1533911 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1028 11:32:51.451416 1533911 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1028 11:32:51.462214 1533911 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1028 11:32:51.473201 1533911 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 11:32:51.483086 1533911 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1028 11:32:51.492521 1533911 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:32:51.586221 1533911 ssh_runner.go:195] Run: sudo systemctl restart containerd
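The sed edits above force SystemdCgroup = false in containerd's runc options because the host was detected as "cgroupfs" at 11:32:50; the kubelet config rendered further below sets the matching `cgroupDriver: cgroupfs`. The two must agree, and a mismatch is a classic cause of containers being killed shortly after start. A quick consistency check, assuming kubeadm has already written the kubelet config:

    grep SystemdCgroup /etc/containerd/config.toml   # expected: SystemdCgroup = false
    grep cgroupDriver /var/lib/kubelet/config.yaml   # expected: cgroupDriver: cgroupfs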
	I1028 11:32:51.744597 1533911 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I1028 11:32:51.744720 1533911 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
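The socket being waited on here matches the runtime-endpoint written into /etc/crictl.yaml at 11:32:51, which is what lets the bare `crictl version` call below succeed without any --runtime-endpoint flag. A hedged equivalent using crictl's own config helper instead of printf | tee:

    # Equivalent effect to the /etc/crictl.yaml write above.
    sudo crictl config --set runtime-endpoint=unix:///run/containerd/containerd.sock
    sudo crictl info >/dev/null && echo "crictl can reach containerd"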
	I1028 11:32:51.748313 1533911 start.go:563] Will wait 60s for crictl version
	I1028 11:32:51.748395 1533911 ssh_runner.go:195] Run: which crictl
	I1028 11:32:51.751653 1533911 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 11:32:51.792849 1533911 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.22
	RuntimeApiVersion:  v1
	I1028 11:32:51.792932 1533911 ssh_runner.go:195] Run: containerd --version
	I1028 11:32:51.818888 1533911 ssh_runner.go:195] Run: containerd --version
	I1028 11:32:51.843993 1533911 out.go:177] * Preparing Kubernetes v1.31.2 on containerd 1.7.22 ...
	I1028 11:32:51.845426 1533911 cli_runner.go:164] Run: docker network inspect embed-certs-542883 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1028 11:32:51.861318 1533911 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1028 11:32:51.865183 1533911 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 11:32:51.876032 1533911 kubeadm.go:883] updating cluster {Name:embed-certs-542883 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-542883 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1028 11:32:51.876165 1533911 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime containerd
	I1028 11:32:51.876233 1533911 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 11:32:51.918199 1533911 containerd.go:627] all images are preloaded for containerd runtime.
	I1028 11:32:51.918222 1533911 containerd.go:534] Images already preloaded, skipping extraction
	I1028 11:32:51.918282 1533911 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 11:32:51.961431 1533911 containerd.go:627] all images are preloaded for containerd runtime.
	I1028 11:32:51.961454 1533911 cache_images.go:84] Images are preloaded, skipping loading
	I1028 11:32:51.961462 1533911 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.31.2 containerd true true} ...
	I1028 11:32:51.961557 1533911 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-542883 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:embed-certs-542883 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
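The empty `ExecStart=` line in the rendered unit is deliberate: it clears the base unit's command before declaring minikube's own, and the fragment is installed as the drop-in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (see the scp a few lines below). To see the merged unit as systemd resolves it on the node:

    # Show the base unit plus minikube's drop-in, and the effective command line.
    systemctl cat kubelet
    systemctl show kubelet -p ExecStart --no-pager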
	I1028 11:32:51.961623 1533911 ssh_runner.go:195] Run: sudo crictl info
	I1028 11:32:52.000108 1533911 cni.go:84] Creating CNI manager for ""
	I1028 11:32:52.000130 1533911 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1028 11:32:52.000140 1533911 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1028 11:32:52.000161 1533911 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-542883 NodeName:embed-certs-542883 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1028 11:32:52.000279 1533911 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "embed-certs-542883"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1028 11:32:52.000342 1533911 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1028 11:32:52.010471 1533911 binaries.go:44] Found k8s binaries, skipping transfer
	I1028 11:32:52.010549 1533911 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1028 11:32:52.020129 1533911 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I1028 11:32:52.039264 1533911 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 11:32:52.058518 1533911 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2308 bytes)
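The kubeadm.yaml.new just copied bundles four documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) rendered from the config printed above. On kubeadm releases of this vintage the file can be sanity-checked before init; a hedged sketch using the node-local binary:

    # Recent kubeadm releases (v1.26+) ship a validator for multi-document configs.
    sudo /var/lib/minikube/binaries/v1.31.2/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new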
	I1028 11:32:52.078039 1533911 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1028 11:32:52.081556 1533911 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
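Two well-known names get pinned in the node's /etc/hosts during start: host.minikube.internal to the network gateway (192.168.85.1, patched at 11:32:51 above) and control-plane.minikube.internal to the node IP (192.168.85.2), the latter matching controlPlaneEndpoint in the kubeadm config. Expected result inside the node:

    getent hosts host.minikube.internal           # 192.168.85.1
    getent hosts control-plane.minikube.internal  # 192.168.85.2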
	I1028 11:32:52.093155 1533911 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:32:52.194918 1533911 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 11:32:52.211459 1533911 certs.go:68] Setting up /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/embed-certs-542883 for IP: 192.168.85.2
	I1028 11:32:52.211531 1533911 certs.go:194] generating shared ca certs ...
	I1028 11:32:52.211562 1533911 certs.go:226] acquiring lock for ca certs: {Name:mk0d3ceca6221298cea760035b38b9c704e7b693 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:32:52.211776 1533911 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19876-1313708/.minikube/ca.key
	I1028 11:32:52.211849 1533911 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19876-1313708/.minikube/proxy-client-ca.key
	I1028 11:32:52.211871 1533911 certs.go:256] generating profile certs ...
	I1028 11:32:52.211964 1533911 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/embed-certs-542883/client.key
	I1028 11:32:52.212000 1533911 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/embed-certs-542883/client.crt with IP's: []
	I1028 11:32:52.482100 1533911 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/embed-certs-542883/client.crt ...
	I1028 11:32:52.482134 1533911 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/embed-certs-542883/client.crt: {Name:mkc2100167cd18b06b84ef0e3a475a22f1be0b25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:32:52.482342 1533911 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/embed-certs-542883/client.key ...
	I1028 11:32:52.482358 1533911 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/embed-certs-542883/client.key: {Name:mkb2837f1d77020aff5cdda4d8ea3d30bc7fb871 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:32:52.483040 1533911 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/embed-certs-542883/apiserver.key.6c26fc4d
	I1028 11:32:52.483097 1533911 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/embed-certs-542883/apiserver.crt.6c26fc4d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1028 11:32:53.160851 1533911 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/embed-certs-542883/apiserver.crt.6c26fc4d ...
	I1028 11:32:53.160886 1533911 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/embed-certs-542883/apiserver.crt.6c26fc4d: {Name:mk0715493b6d379c08fd8c18774148895c639a0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:32:53.161555 1533911 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/embed-certs-542883/apiserver.key.6c26fc4d ...
	I1028 11:32:53.161577 1533911 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/embed-certs-542883/apiserver.key.6c26fc4d: {Name:mke7826bac1f5ff37de405e1ec9c1b4350078356 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:32:53.161712 1533911 certs.go:381] copying /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/embed-certs-542883/apiserver.crt.6c26fc4d -> /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/embed-certs-542883/apiserver.crt
	I1028 11:32:53.161841 1533911 certs.go:385] copying /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/embed-certs-542883/apiserver.key.6c26fc4d -> /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/embed-certs-542883/apiserver.key
	I1028 11:32:53.161931 1533911 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/embed-certs-542883/proxy-client.key
	I1028 11:32:53.161968 1533911 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/embed-certs-542883/proxy-client.crt with IP's: []
	I1028 11:32:53.452656 1533911 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/embed-certs-542883/proxy-client.crt ...
	I1028 11:32:53.452687 1533911 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/embed-certs-542883/proxy-client.crt: {Name:mkae14fb0ab70a2d610d7f9bd3223f3e822792ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:32:53.452921 1533911 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/embed-certs-542883/proxy-client.key ...
	I1028 11:32:53.452939 1533911 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/embed-certs-542883/proxy-client.key: {Name:mk597dc519e482e4de044bfd06cfa6289329f33c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:32:53.453819 1533911 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-1313708/.minikube/certs/1319098.pem (1338 bytes)
	W1028 11:32:53.453866 1533911 certs.go:480] ignoring /home/jenkins/minikube-integration/19876-1313708/.minikube/certs/1319098_empty.pem, impossibly tiny 0 bytes
	I1028 11:32:53.453883 1533911 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-1313708/.minikube/certs/ca-key.pem (1675 bytes)
	I1028 11:32:53.453908 1533911 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-1313708/.minikube/certs/ca.pem (1078 bytes)
	I1028 11:32:53.453934 1533911 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-1313708/.minikube/certs/cert.pem (1123 bytes)
	I1028 11:32:53.453961 1533911 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-1313708/.minikube/certs/key.pem (1675 bytes)
	I1028 11:32:53.454012 1533911 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-1313708/.minikube/files/etc/ssl/certs/13190982.pem (1708 bytes)
	I1028 11:32:53.454625 1533911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-1313708/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 11:32:53.480099 1533911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-1313708/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1028 11:32:53.505222 1533911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-1313708/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 11:32:53.529431 1533911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-1313708/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1028 11:32:53.557767 1533911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/embed-certs-542883/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1028 11:32:53.582376 1533911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/embed-certs-542883/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1028 11:32:53.606975 1533911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/embed-certs-542883/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 11:32:53.633689 1533911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/embed-certs-542883/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1028 11:32:53.677702 1533911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-1313708/.minikube/certs/1319098.pem --> /usr/share/ca-certificates/1319098.pem (1338 bytes)
	I1028 11:32:53.705568 1533911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-1313708/.minikube/files/etc/ssl/certs/13190982.pem --> /usr/share/ca-certificates/13190982.pem (1708 bytes)
	I1028 11:32:53.735856 1533911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-1313708/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 11:32:53.764931 1533911 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1028 11:32:53.785339 1533911 ssh_runner.go:195] Run: openssl version
	I1028 11:32:53.792240 1533911 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1319098.pem && ln -fs /usr/share/ca-certificates/1319098.pem /etc/ssl/certs/1319098.pem"
	I1028 11:32:53.804696 1533911 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1319098.pem
	I1028 11:32:53.809144 1533911 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 10:48 /usr/share/ca-certificates/1319098.pem
	I1028 11:32:53.809212 1533911 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1319098.pem
	I1028 11:32:53.817244 1533911 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1319098.pem /etc/ssl/certs/51391683.0"
	I1028 11:32:53.828886 1533911 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13190982.pem && ln -fs /usr/share/ca-certificates/13190982.pem /etc/ssl/certs/13190982.pem"
	I1028 11:32:53.839915 1533911 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13190982.pem
	I1028 11:32:53.844260 1533911 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 10:48 /usr/share/ca-certificates/13190982.pem
	I1028 11:32:53.844323 1533911 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13190982.pem
	I1028 11:32:53.852009 1533911 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/13190982.pem /etc/ssl/certs/3ec20f2e.0"
	I1028 11:32:53.874601 1533911 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 11:32:53.891728 1533911 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:32:53.907980 1533911 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 10:41 /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:32:53.908053 1533911 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:32:53.930015 1533911 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
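The hash-and-symlink sequence above implements OpenSSL's hashed-directory CA lookup: a certificate is found by symlinking its PEM to <subject-hash>.0 inside /etc/ssl/certs, which is how b5213941.0 comes to point at minikubeCA.pem. The same scheme reproduced by hand for one CA:

    # Compute the subject hash and create the lookup symlink manually.
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
    ls -l "/etc/ssl/certs/${hash}.0"   # hash is b5213941 in this run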
	I1028 11:32:53.953413 1533911 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 11:32:53.967514 1533911 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1028 11:32:53.967569 1533911 kubeadm.go:392] StartCluster: {Name:embed-certs-542883 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-542883 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 11:32:53.967738 1533911 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1028 11:32:53.967797 1533911 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 11:32:54.037681 1533911 cri.go:89] found id: ""
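	The empty `found id: ""` result means no kube-system containers matched, so minikube proceeds as if this were a fresh control plane. The query it ran can be reproduced verbatim on the node:
	
		# list every kube-system container known to the CRI endpoint
		sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system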
	I1028 11:32:54.037751 1533911 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1028 11:32:54.048590 1533911 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 11:32:54.057684 1533911 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1028 11:32:54.057759 1533911 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 11:32:54.067182 1533911 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 11:32:54.067200 1533911 kubeadm.go:157] found existing configuration files:
	
	I1028 11:32:54.067260 1533911 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 11:32:54.076439 1533911 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 11:32:54.076524 1533911 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 11:32:54.085394 1533911 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 11:32:54.094624 1533911 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 11:32:54.094695 1533911 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 11:32:54.103513 1533911 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 11:32:54.112477 1533911 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 11:32:54.112543 1533911 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 11:32:54.121065 1533911 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 11:32:54.130065 1533911 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 11:32:54.130129 1533911 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
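	Each grep/rm pair above is the same idempotent cleanup: a kubeconfig is kept only if it already points at the expected control-plane endpoint; otherwise it is removed so kubeadm can regenerate it. The pattern for one file, as a sketch:
	
		# drop a stale kubeconfig unless it targets the expected endpoint
		sudo grep -q "https://control-plane.minikube.internal:8443" /etc/kubernetes/admin.conf \
		  || sudo rm -f /etc/kubernetes/admin.conf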
	I1028 11:32:54.138542 1533911 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1028 11:32:54.182294 1533911 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1028 11:32:54.182373 1533911 kubeadm.go:310] [preflight] Running pre-flight checks
	I1028 11:32:54.210387 1533911 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I1028 11:32:54.210481 1533911 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1071-aws
	I1028 11:32:54.210536 1533911 kubeadm.go:310] OS: Linux
	I1028 11:32:54.210598 1533911 kubeadm.go:310] CGROUPS_CPU: enabled
	I1028 11:32:54.210665 1533911 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I1028 11:32:54.210729 1533911 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I1028 11:32:54.210792 1533911 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I1028 11:32:54.210858 1533911 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I1028 11:32:54.210923 1533911 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I1028 11:32:54.210985 1533911 kubeadm.go:310] CGROUPS_PIDS: enabled
	I1028 11:32:54.211048 1533911 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I1028 11:32:54.211109 1533911 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I1028 11:32:54.275030 1533911 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1028 11:32:54.275191 1533911 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1028 11:32:54.275327 1533911 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1028 11:32:54.281392 1533911 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1028 11:32:57.318433 1522650 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1028 11:32:57.330588 1522650 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1028 11:32:57.332171 1522650 out.go:201] 
	W1028 11:32:57.333624 1522650 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W1028 11:32:57.333844 1522650 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	W1028 11:32:57.333985 1522650 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	W1028 11:32:57.334048 1522650 out.go:270] * 
	W1028 11:32:57.335332 1522650 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1028 11:32:57.337475 1522650 out.go:201] 
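	This exit path is the failure under test: the API server answers /healthz with 200, but the control plane never reports the requested v1.20.0, so the 6m0s wait gives up with K8S_UNHEALTHY_CONTROL_PLANE. The recovery the log itself suggests is to recreate the profile from scratch:
	
		# discard all minikube profiles and cached state, then retry the start
		minikube delete --all --purge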
	I1028 11:32:54.283975 1533911 out.go:235]   - Generating certificates and keys ...
	I1028 11:32:54.284086 1533911 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1028 11:32:54.284154 1533911 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1028 11:32:55.599535 1533911 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1028 11:32:56.197181 1533911 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1028 11:32:56.714502 1533911 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	be9f8802d8916       523cad1a4df73       2 minutes ago       Exited              dashboard-metrics-scraper   5                   5345e4b44a194       dashboard-metrics-scraper-8d5bb5db8-8ft4v
	af354fdce961d       ba04bb24b9575       5 minutes ago       Running             storage-provisioner         2                   348661137a892       storage-provisioner
	9666309986efc       20b332c9a70d8       5 minutes ago       Running             kubernetes-dashboard        0                   814b17aa028a6       kubernetes-dashboard-cd95d586-v2szp
	a4e428255f3fd       1611cd07b61d5       5 minutes ago       Running             busybox                     1                   e55de8161dcff       busybox
	b864ea5367f07       db91994f4ee8f       5 minutes ago       Running             coredns                     1                   9cc76b05b8d1c       coredns-74ff55c5b-wlp24
	42478c583a7df       0bcd66b03df5f       5 minutes ago       Running             kindnet-cni                 1                   38356644fc1c8       kindnet-njzd8
	c0ed41137fbff       25a5233254979       5 minutes ago       Running             kube-proxy                  1                   9ac5a70069144       kube-proxy-sdcls
	e4aa22206b37d       ba04bb24b9575       5 minutes ago       Exited              storage-provisioner         1                   348661137a892       storage-provisioner
	056d20453e357       1df8a2b116bd1       5 minutes ago       Running             kube-controller-manager     1                   c36c0bc4cda7d       kube-controller-manager-old-k8s-version-674802
	31281b2de0e80       e7605f88f17d6       5 minutes ago       Running             kube-scheduler              1                   0f570043b6027       kube-scheduler-old-k8s-version-674802
	c02d779e69c4a       2c08bbbc02d3a       5 minutes ago       Running             kube-apiserver              1                   be3ff56e02e17       kube-apiserver-old-k8s-version-674802
	6208543cc8b3c       05b738aa1bc63       5 minutes ago       Running             etcd                        1                   bd060b6e7fd72       etcd-old-k8s-version-674802
	4ca77cb193da9       1611cd07b61d5       6 minutes ago       Exited              busybox                     0                   ffc76deef9cf5       busybox
	2a9df06520f73       db91994f4ee8f       7 minutes ago       Exited              coredns                     0                   9aeed8465d616       coredns-74ff55c5b-wlp24
	120e0085c59b7       0bcd66b03df5f       7 minutes ago       Exited              kindnet-cni                 0                   701b8b812804d       kindnet-njzd8
	8d4b3dad3dd90       25a5233254979       7 minutes ago       Exited              kube-proxy                  0                   8f0e39e75e045       kube-proxy-sdcls
	4937ca78533bb       1df8a2b116bd1       8 minutes ago       Exited              kube-controller-manager     0                   dbec23a98a1c6       kube-controller-manager-old-k8s-version-674802
	ba54ab63823c2       2c08bbbc02d3a       8 minutes ago       Exited              kube-apiserver              0                   cb8fcd27e6db8       kube-apiserver-old-k8s-version-674802
	01a108b46e6f4       05b738aa1bc63       8 minutes ago       Exited              etcd                        0                   a7ae75ef9e42a       etcd-old-k8s-version-674802
	857580d96023b       e7605f88f17d6       8 minutes ago       Exited              kube-scheduler              0                   81b2f8da04b4f       kube-scheduler-old-k8s-version-674802
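	The table is the container runtime's view of the node; the row of interest is dashboard-metrics-scraper, Exited on attempt 5, which is the crash-looping pod behind the dashboard warnings. An equivalent table can be pulled straight from the runtime:
	
		# show running and exited containers with their pod associations
		sudo crictl ps -a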
	
	
	==> containerd <==
	Oct 28 11:29:13 old-k8s-version-674802 containerd[568]: time="2024-10-28T11:29:13.425316785Z" level=info msg="CreateContainer within sandbox \"5345e4b44a19483b63b20f1608ff31e77b765e63476661d06091c7f5730ef7db\" for name:\"dashboard-metrics-scraper\"  attempt:4 returns container id \"7631bc5614df611ecf9e62992d7401e688390e5ca64417c5655b6c707c841299\""
	Oct 28 11:29:13 old-k8s-version-674802 containerd[568]: time="2024-10-28T11:29:13.425970470Z" level=info msg="StartContainer for \"7631bc5614df611ecf9e62992d7401e688390e5ca64417c5655b6c707c841299\""
	Oct 28 11:29:13 old-k8s-version-674802 containerd[568]: time="2024-10-28T11:29:13.501874834Z" level=info msg="StartContainer for \"7631bc5614df611ecf9e62992d7401e688390e5ca64417c5655b6c707c841299\" returns successfully"
	Oct 28 11:29:13 old-k8s-version-674802 containerd[568]: time="2024-10-28T11:29:13.528713123Z" level=info msg="shim disconnected" id=7631bc5614df611ecf9e62992d7401e688390e5ca64417c5655b6c707c841299 namespace=k8s.io
	Oct 28 11:29:13 old-k8s-version-674802 containerd[568]: time="2024-10-28T11:29:13.528772856Z" level=warning msg="cleaning up after shim disconnected" id=7631bc5614df611ecf9e62992d7401e688390e5ca64417c5655b6c707c841299 namespace=k8s.io
	Oct 28 11:29:13 old-k8s-version-674802 containerd[568]: time="2024-10-28T11:29:13.528785295Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Oct 28 11:29:14 old-k8s-version-674802 containerd[568]: time="2024-10-28T11:29:14.080153419Z" level=info msg="RemoveContainer for \"309db5e97f5f2a99eeabfc6729f149f2248af8f014e3018a65087b5b753d7dfd\""
	Oct 28 11:29:14 old-k8s-version-674802 containerd[568]: time="2024-10-28T11:29:14.086338887Z" level=info msg="RemoveContainer for \"309db5e97f5f2a99eeabfc6729f149f2248af8f014e3018a65087b5b753d7dfd\" returns successfully"
	Oct 28 11:29:57 old-k8s-version-674802 containerd[568]: time="2024-10-28T11:29:57.417150504Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 28 11:29:57 old-k8s-version-674802 containerd[568]: time="2024-10-28T11:29:57.428820503Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" host=fake.domain
	Oct 28 11:29:57 old-k8s-version-674802 containerd[568]: time="2024-10-28T11:29:57.430735487Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Oct 28 11:29:57 old-k8s-version-674802 containerd[568]: time="2024-10-28T11:29:57.430827950Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Oct 28 11:30:34 old-k8s-version-674802 containerd[568]: time="2024-10-28T11:30:34.400193598Z" level=info msg="CreateContainer within sandbox \"5345e4b44a19483b63b20f1608ff31e77b765e63476661d06091c7f5730ef7db\" for container name:\"dashboard-metrics-scraper\"  attempt:5"
	Oct 28 11:30:34 old-k8s-version-674802 containerd[568]: time="2024-10-28T11:30:34.420739620Z" level=info msg="CreateContainer within sandbox \"5345e4b44a19483b63b20f1608ff31e77b765e63476661d06091c7f5730ef7db\" for name:\"dashboard-metrics-scraper\"  attempt:5 returns container id \"be9f8802d8916ebf81427a08c9b5699aeaa6098d28b576618aa5899e3f59e09b\""
	Oct 28 11:30:34 old-k8s-version-674802 containerd[568]: time="2024-10-28T11:30:34.421304239Z" level=info msg="StartContainer for \"be9f8802d8916ebf81427a08c9b5699aeaa6098d28b576618aa5899e3f59e09b\""
	Oct 28 11:30:34 old-k8s-version-674802 containerd[568]: time="2024-10-28T11:30:34.493952705Z" level=info msg="StartContainer for \"be9f8802d8916ebf81427a08c9b5699aeaa6098d28b576618aa5899e3f59e09b\" returns successfully"
	Oct 28 11:30:34 old-k8s-version-674802 containerd[568]: time="2024-10-28T11:30:34.519533658Z" level=info msg="shim disconnected" id=be9f8802d8916ebf81427a08c9b5699aeaa6098d28b576618aa5899e3f59e09b namespace=k8s.io
	Oct 28 11:30:34 old-k8s-version-674802 containerd[568]: time="2024-10-28T11:30:34.519605420Z" level=warning msg="cleaning up after shim disconnected" id=be9f8802d8916ebf81427a08c9b5699aeaa6098d28b576618aa5899e3f59e09b namespace=k8s.io
	Oct 28 11:30:34 old-k8s-version-674802 containerd[568]: time="2024-10-28T11:30:34.519617129Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Oct 28 11:30:35 old-k8s-version-674802 containerd[568]: time="2024-10-28T11:30:35.286855726Z" level=info msg="RemoveContainer for \"7631bc5614df611ecf9e62992d7401e688390e5ca64417c5655b6c707c841299\""
	Oct 28 11:30:35 old-k8s-version-674802 containerd[568]: time="2024-10-28T11:30:35.294109410Z" level=info msg="RemoveContainer for \"7631bc5614df611ecf9e62992d7401e688390e5ca64417c5655b6c707c841299\" returns successfully"
	Oct 28 11:32:39 old-k8s-version-674802 containerd[568]: time="2024-10-28T11:32:39.398293323Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 28 11:32:39 old-k8s-version-674802 containerd[568]: time="2024-10-28T11:32:39.416163075Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" host=fake.domain
	Oct 28 11:32:39 old-k8s-version-674802 containerd[568]: time="2024-10-28T11:32:39.418036771Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Oct 28 11:32:39 old-k8s-version-674802 containerd[568]: time="2024-10-28T11:32:39.418255979Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
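	The repeated PullImage failures are expected in this test: metrics-server is pointed at fake.domain, an intentionally unresolvable registry, so the pod can never start. The resolver error containerd logs can be reproduced by hand with the same image reference:
	
		# fails with "lookup fake.domain ... no such host", matching the log
		sudo crictl pull fake.domain/registry.k8s.io/echoserver:1.4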
	
	
	==> coredns [2a9df06520f732f1766508da84b61f745cb047b5f7bcf5bf3ef9cb3891f6239f] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:54691 - 59512 "HINFO IN 701964382770757036.1622313577083864860. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.016523145s
	
	
	==> coredns [b864ea5367f07235e01b7c4c4545bda20ba5924d99b8e542c0315227a77c2c59] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:37801 - 13344 "HINFO IN 981161145988552116.1207215888426285145. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.036677021s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-674802
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-674802
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=605803b196d1455ad0982199aad6722d11920536
	                    minikube.k8s.io/name=old-k8s-version-674802
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_28T11_24_43_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 28 Oct 2024 11:24:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-674802
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 28 Oct 2024 11:32:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 28 Oct 2024 11:28:04 +0000   Mon, 28 Oct 2024 11:24:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 28 Oct 2024 11:28:04 +0000   Mon, 28 Oct 2024 11:24:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 28 Oct 2024 11:28:04 +0000   Mon, 28 Oct 2024 11:24:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 28 Oct 2024 11:28:04 +0000   Mon, 28 Oct 2024 11:24:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-674802
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 647d9b2317cf4e92bba105056215e984
	  System UUID:                20b2357e-356d-46a8-b586-d57348d369c5
	  Boot ID:                    7206fba0-79a5-434d-956e-eb6133d7b735
	  Kernel Version:             5.15.0-1071-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.22
	  Kubelet Version:            v1.20.0
	  Kube-Proxy Version:         v1.20.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m40s
	  kube-system                 coredns-74ff55c5b-wlp24                           100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     8m1s
	  kube-system                 etcd-old-k8s-version-674802                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         8m8s
	  kube-system                 kindnet-njzd8                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      8m1s
	  kube-system                 kube-apiserver-old-k8s-version-674802             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m8s
	  kube-system                 kube-controller-manager-old-k8s-version-674802    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m8s
	  kube-system                 kube-proxy-sdcls                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m1s
	  kube-system                 kube-scheduler-old-k8s-version-674802             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m8s
	  kube-system                 metrics-server-9975d5f86-lv8qx                    100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         6m29s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m
	  kubernetes-dashboard        dashboard-metrics-scraper-8d5bb5db8-8ft4v         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m29s
	  kubernetes-dashboard        kubernetes-dashboard-cd95d586-v2szp               0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             420Mi (5%)  220Mi (2%)
	  ephemeral-storage  100Mi (0%)  0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  8m27s (x5 over 8m28s)  kubelet     Node old-k8s-version-674802 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m27s (x4 over 8m28s)  kubelet     Node old-k8s-version-674802 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m27s (x4 over 8m28s)  kubelet     Node old-k8s-version-674802 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m9s                   kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m8s                   kubelet     Node old-k8s-version-674802 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m8s                   kubelet     Node old-k8s-version-674802 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m8s                   kubelet     Node old-k8s-version-674802 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m8s                   kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                8m1s                   kubelet     Node old-k8s-version-674802 status is now: NodeReady
	  Normal  Starting                 8m                     kube-proxy  Starting kube-proxy.
	  Normal  Starting                 6m                     kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m (x8 over 6m)        kubelet     Node old-k8s-version-674802 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m (x8 over 6m)        kubelet     Node old-k8s-version-674802 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m (x7 over 6m)        kubelet     Node old-k8s-version-674802 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m                     kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 5m45s                  kube-proxy  Starting kube-proxy.
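	Note the version skew this profile exercises: the node runs Kubelet/Kube-Proxy v1.20.0 on containerd 1.7.22 under v1.34-era minikube tooling. The section above is a standard node description and can be regenerated against this profile (context name assumed to match the profile, as minikube sets by default):
	
		# inspect the node through the profile's kubeconfig context
		kubectl --context old-k8s-version-674802 describe node old-k8s-version-674802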
	
	
	==> dmesg <==
	[Oct28 09:59] systemd-journald[220]: Failed to send stream file descriptor to service manager: Connection refused
	[Oct28 10:02] overlayfs: '/var/lib/containers/storage/overlay/l/ZLTOCNGE2IGM6DT7VP2QP7OV3M' not a directory
	[  +0.673094] overlayfs: '/var/lib/containers/storage/overlay/l/Q2QJNMTVZL6GMULS36RA5ZJGSA' not a directory
	
	
	==> etcd [01a108b46e6f4f9217c1f90a9611bdbc7956ad16edbfd8093ad46cc6ef34b232] <==
	2024-10-28 11:24:32.750749 I | etcdserver/membership: added member ea7e25599daad906 [https://192.168.76.2:2380] to cluster 6f20f2c4b2fb5f8a
	2024-10-28 11:24:32.750796 I | embed: listening for peers on 192.168.76.2:2380
	raft2024/10/28 11:24:33 INFO: ea7e25599daad906 is starting a new election at term 1
	raft2024/10/28 11:24:33 INFO: ea7e25599daad906 became candidate at term 2
	raft2024/10/28 11:24:33 INFO: ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2
	raft2024/10/28 11:24:33 INFO: ea7e25599daad906 became leader at term 2
	raft2024/10/28 11:24:33 INFO: raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2
	2024-10-28 11:24:33.415771 I | etcdserver: setting up the initial cluster version to 3.4
	2024-10-28 11:24:33.416061 I | etcdserver: published {Name:old-k8s-version-674802 ClientURLs:[https://192.168.76.2:2379]} to cluster 6f20f2c4b2fb5f8a
	2024-10-28 11:24:33.416199 I | embed: ready to serve client requests
	2024-10-28 11:24:33.420512 I | embed: serving client requests on 192.168.76.2:2379
	2024-10-28 11:24:33.425385 I | embed: ready to serve client requests
	2024-10-28 11:24:33.433780 N | etcdserver/membership: set the initial cluster version to 3.4
	2024-10-28 11:24:33.435652 I | etcdserver/api: enabled capabilities for version 3.4
	2024-10-28 11:24:33.435921 I | embed: serving client requests on 127.0.0.1:2379
	2024-10-28 11:24:59.142145 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-28 11:25:03.607852 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-28 11:25:13.607862 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-28 11:25:23.607852 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-28 11:25:33.607968 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-28 11:25:43.607887 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-28 11:25:53.607917 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-28 11:26:03.607848 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-28 11:26:13.609723 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-28 11:26:23.608181 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> etcd [6208543cc8b3c7edcccd800e0f9d98e845390bf870426de3226d81781dce3148] <==
	2024-10-28 11:28:49.621294 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-28 11:28:59.621366 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-28 11:29:09.621202 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-28 11:29:19.621350 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-28 11:29:29.621333 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-28 11:29:39.621293 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-28 11:29:49.621215 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-28 11:29:59.621398 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-28 11:30:09.621298 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-28 11:30:19.621454 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-28 11:30:29.621382 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-28 11:30:39.621283 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-28 11:30:49.621351 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-28 11:30:59.621468 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-28 11:31:09.621231 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-28 11:31:19.621334 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-28 11:31:29.621347 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-28 11:31:39.621198 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-28 11:31:49.621345 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-28 11:31:59.621832 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-28 11:32:09.621268 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-28 11:32:19.621480 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-28 11:32:29.621312 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-28 11:32:39.621760 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-28 11:32:49.624027 I | etcdserver/api/etcdhttp: /health OK (status code 200)
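	Both etcd generations report /health OK for the whole window, so the stalled control-plane update is not an etcd problem. The same probe can be issued with etcdctl; the certificate paths below are the usual kubeadm defaults under /var/lib/minikube/certs and are an assumption, not taken from this log:
	
		# query member health directly (cert paths assumed from kubeadm defaults)
		ETCDCTL_API=3 etcdctl --endpoints=https://192.168.76.2:2379 \
		  --cacert=/var/lib/minikube/certs/etcd/ca.crt \
		  --cert=/var/lib/minikube/certs/etcd/server.crt \
		  --key=/var/lib/minikube/certs/etcd/server.key \
		  endpoint health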
	
	
	==> kernel <==
	 11:32:59 up 1 day, 17:15,  0 users,  load average: 1.39, 1.66, 2.29
	Linux old-k8s-version-674802 5.15.0-1071-aws #77~20.04.1-Ubuntu SMP Thu Oct 3 19:34:36 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [120e0085c59b7ce7fd3c7afbb14ea7637d4c18b660f3d35631be06f9007e3a33] <==
	I1028 11:25:01.828559       1 main.go:148] setting mtu 1500 for CNI 
	I1028 11:25:01.828572       1 main.go:178] kindnetd IP family: "ipv4"
	I1028 11:25:01.828585       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	I1028 11:25:02.130829       1 controller.go:338] Starting controller kube-network-policies
	I1028 11:25:02.131003       1 controller.go:342] Waiting for informer caches to sync
	I1028 11:25:02.131017       1 shared_informer.go:313] Waiting for caches to sync for kube-network-policies
	I1028 11:25:02.331761       1 shared_informer.go:320] Caches are synced for kube-network-policies
	I1028 11:25:02.331784       1 metrics.go:61] Registering metrics
	I1028 11:25:02.331940       1 controller.go:378] Syncing nftables rules
	I1028 11:25:12.137929       1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
	I1028 11:25:12.138186       1 main.go:300] handling current node
	I1028 11:25:22.129556       1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
	I1028 11:25:22.129658       1 main.go:300] handling current node
	I1028 11:25:32.135605       1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
	I1028 11:25:32.135664       1 main.go:300] handling current node
	I1028 11:25:42.137228       1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
	I1028 11:25:42.137502       1 main.go:300] handling current node
	I1028 11:25:52.129926       1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
	I1028 11:25:52.129960       1 main.go:300] handling current node
	I1028 11:26:02.130033       1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
	I1028 11:26:02.130073       1 main.go:300] handling current node
	I1028 11:26:12.133426       1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
	I1028 11:26:12.133461       1 main.go:300] handling current node
	I1028 11:26:22.131696       1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
	I1028 11:26:22.131732       1 main.go:300] handling current node
	
	
	==> kindnet [42478c583a7df5e62ae3718bc78fb6be211abb490f9190466870859ec29e3bf3] <==
	I1028 11:30:55.437974       1 main.go:300] handling current node
	I1028 11:31:05.435979       1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
	I1028 11:31:05.436015       1 main.go:300] handling current node
	I1028 11:31:15.428756       1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
	I1028 11:31:15.428861       1 main.go:300] handling current node
	I1028 11:31:25.435910       1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
	I1028 11:31:25.435949       1 main.go:300] handling current node
	I1028 11:31:35.437103       1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
	I1028 11:31:35.437153       1 main.go:300] handling current node
	I1028 11:31:45.430845       1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
	I1028 11:31:45.430879       1 main.go:300] handling current node
	I1028 11:31:55.435716       1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
	I1028 11:31:55.435752       1 main.go:300] handling current node
	I1028 11:32:05.429615       1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
	I1028 11:32:05.429647       1 main.go:300] handling current node
	I1028 11:32:15.429104       1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
	I1028 11:32:15.429145       1 main.go:300] handling current node
	I1028 11:32:25.435442       1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
	I1028 11:32:25.435482       1 main.go:300] handling current node
	I1028 11:32:35.436794       1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
	I1028 11:32:35.436829       1 main.go:300] handling current node
	I1028 11:32:45.436724       1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
	I1028 11:32:45.436758       1 main.go:300] handling current node
	I1028 11:32:55.431692       1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
	I1028 11:32:55.431728       1 main.go:300] handling current node
	
	
	==> kube-apiserver [ba54ab63823c2fcfe3e9bc95fca852e480e0d8fae4071a23e1fc38d3e74384cc] <==
	I1028 11:24:40.365291       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I1028 11:24:40.365416       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1028 11:24:40.389107       1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
	I1028 11:24:40.400225       1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
	I1028 11:24:40.400395       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I1028 11:24:40.836649       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1028 11:24:40.891712       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W1028 11:24:40.949077       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1028 11:24:40.950426       1 controller.go:606] quota admission added evaluator for: endpoints
	I1028 11:24:40.954844       1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1028 11:24:42.002434       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	I1028 11:24:42.480303       1 controller.go:606] quota admission added evaluator for: deployments.apps
	I1028 11:24:42.561454       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	I1028 11:24:50.915824       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
	I1028 11:24:58.389577       1 controller.go:606] quota admission added evaluator for: replicasets.apps
	I1028 11:24:58.724133       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
	I1028 11:25:04.776086       1 client.go:360] parsed scheme: "passthrough"
	I1028 11:25:04.776130       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1028 11:25:04.776139       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I1028 11:25:38.192691       1 client.go:360] parsed scheme: "passthrough"
	I1028 11:25:38.192733       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1028 11:25:38.192742       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I1028 11:26:16.152283       1 client.go:360] parsed scheme: "passthrough"
	I1028 11:26:16.152346       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1028 11:26:16.152355       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	
	==> kube-apiserver [c02d779e69c4a6181f499ea147b62985bdd68ffb9d61fe7dab43115ca4318de6] <==
	I1028 11:29:58.993279       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1028 11:29:58.993288       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W1028 11:30:14.511668       1 handler_proxy.go:102] no RequestInfo found in the context
	E1028 11:30:14.511767       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1028 11:30:14.511783       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1028 11:30:31.959866       1 client.go:360] parsed scheme: "passthrough"
	I1028 11:30:31.959909       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1028 11:30:31.959919       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I1028 11:31:04.256336       1 client.go:360] parsed scheme: "passthrough"
	I1028 11:31:04.256378       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1028 11:31:04.256387       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I1028 11:31:35.753953       1 client.go:360] parsed scheme: "passthrough"
	I1028 11:31:35.753996       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1028 11:31:35.754030       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I1028 11:32:05.876255       1 client.go:360] parsed scheme: "passthrough"
	I1028 11:32:05.876304       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1028 11:32:05.876313       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W1028 11:32:12.752803       1 handler_proxy.go:102] no RequestInfo found in the context
	E1028 11:32:12.752996       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1028 11:32:12.753018       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1028 11:32:47.220657       1 client.go:360] parsed scheme: "passthrough"
	I1028 11:32:47.220715       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1028 11:32:47.220725       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	
	==> kube-controller-manager [056d20453e357e86aa3e62b0dd7d945c40ec05cc5f462727941eac3714718438] <==
	W1028 11:28:38.012362       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1028 11:29:01.947463       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1028 11:29:09.662751       1 request.go:655] Throttling request took 1.048494344s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W1028 11:29:10.514128       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1028 11:29:32.449331       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1028 11:29:42.164901       1 request.go:655] Throttling request took 1.048392637s, request: GET:https://192.168.76.2:8443/apis/scheduling.k8s.io/v1?timeout=32s
	W1028 11:29:43.016425       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1028 11:30:02.952240       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1028 11:30:14.666885       1 request.go:655] Throttling request took 1.048369691s, request: GET:https://192.168.76.2:8443/apis/storage.k8s.io/v1beta1?timeout=32s
	W1028 11:30:15.518322       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1028 11:30:33.454758       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1028 11:30:47.168889       1 request.go:655] Throttling request took 1.048192702s, request: GET:https://192.168.76.2:8443/apis/policy/v1beta1?timeout=32s
	W1028 11:30:48.020737       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1028 11:31:03.956679       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1028 11:31:19.671181       1 request.go:655] Throttling request took 1.048408313s, request: GET:https://192.168.76.2:8443/apis/rbac.authorization.k8s.io/v1?timeout=32s
	W1028 11:31:20.522538       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1028 11:31:34.458598       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1028 11:31:52.172936       1 request.go:655] Throttling request took 1.048338861s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W1028 11:31:53.024452       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1028 11:32:04.960867       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1028 11:32:24.674832       1 request.go:655] Throttling request took 1.045105943s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W1028 11:32:25.526122       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1028 11:32:35.462934       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1028 11:32:57.176513       1 request.go:655] Throttling request took 1.04820022s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W1028 11:32:58.029375       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
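	Every error in this block traces back to the aggregated v1beta1.metrics.k8s.io API: its backing metrics-server pod never starts (see the fake.domain pull failures above), so discovery and resource-quota syncs keep timing out against it. The failing APIService can be inspected directly:
	
		# expected to show Available=False while metrics-server is unreachable
		kubectl --context old-k8s-version-674802 get apiservice v1beta1.metrics.k8s.io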
	
	
	==> kube-controller-manager [4937ca78533bbe1e9024be3e8c38035f4fb621e9cfcd8ef6fc974857b5f788d7] <==
	I1028 11:24:58.537566       1 shared_informer.go:247] Caches are synced for taint 
	I1028 11:24:58.537665       1 node_lifecycle_controller.go:1429] Initializing eviction metric for zone: 
	I1028 11:24:58.537734       1 taint_manager.go:187] Starting NoExecuteTaintManager
	W1028 11:24:58.537815       1 node_lifecycle_controller.go:1044] Missing timestamp for Node old-k8s-version-674802. Assuming now as a timestamp.
	I1028 11:24:58.537947       1 node_lifecycle_controller.go:1195] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	I1028 11:24:58.538971       1 event.go:291] "Event occurred" object="old-k8s-version-674802" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node old-k8s-version-674802 event: Registered Node old-k8s-version-674802 in Controller"
	I1028 11:24:58.572667       1 range_allocator.go:373] Set node old-k8s-version-674802 PodCIDR to [10.244.0.0/24]
	I1028 11:24:58.577986       1 shared_informer.go:247] Caches are synced for resource quota 
	I1028 11:24:58.581116       1 shared_informer.go:247] Caches are synced for stateful set 
	I1028 11:24:58.585263       1 shared_informer.go:247] Caches are synced for resource quota 
	I1028 11:24:58.630000       1 shared_informer.go:247] Caches are synced for daemon sets 
	I1028 11:24:58.765904       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-njzd8"
	I1028 11:24:58.765945       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-sdcls"
	I1028 11:24:58.818726       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	E1028 11:24:58.916440       1 daemon_controller.go:320] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"dfa45256-7428-4f74-ade3-ef655454ad7c", ResourceVersion:"256", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63865711482, loc:(*time.Location)(0x632eb80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x400042ea80), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x400042eaa0)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x400042eae0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0x4000de7800), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x400042eb80), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x400042ebc0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.20.0", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0x400042ec80)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x4000768ea0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x4000d4fa28), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x40000f5730), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x4000767630)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x4000d4fac8)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
	E1028 11:24:58.921942       1 daemon_controller.go:320] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kindnet", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"f2cb06fd-e71b-4014-8ed7-73d65dab8e3b", ResourceVersion:"413", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63865711483, loc:(*time.Location)(0x632eb80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"},\"name\":\"kindnet\",\"namespace\":\"kube-system\"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"k
indnet\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"}},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"HOST_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.hostIP\"}}},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}},{\"name\":\"POD_SUBNET\",\"value\":\"10.244.0.0/16\"}],\"image\":\"docker.io/kindest/kindnetd:v20241007-36f62932\",\"name\":\"kindnet-cni\",\"resources\":{\"limits\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"},\"requests\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"}},\"securityContext\":{\"capabilities\":{\"add\":[\"NET_RAW\",\"NET_ADMIN\"]},\"privileged\":false},\"volumeMounts\":[{\"mountPath\":\"/etc/cni/net.d\",\"name\":\"cni-cfg\"},{\"mountPath\":\"/run/xtables.lock\",\"name\":\"xtables-lock\",\"readOnly\":false},{\"mountPath\":\"/lib/modules\",\"name\":\"lib-modules\",\"readOnly\":true}]}],\"hostNetwork\":true,\"serviceAccountName\":\"kindnet\",\"tolerations\":[{\"effect\":\"NoSchedule\",\"operator\
":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/cni/net.d\",\"type\":\"DirectoryOrCreate\"},\"name\":\"cni-cfg\"},{\"hostPath\":{\"path\":\"/run/xtables.lock\",\"type\":\"FileOrCreate\"},\"name\":\"xtables-lock\"},{\"hostPath\":{\"path\":\"/lib/modules\"},\"name\":\"lib-modules\"}]}}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubectl-client-side-apply", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x400194c020), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x400194c040)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x400194c060), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x400194c080)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x400194c0a0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generatio
n:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"cni-cfg", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x400194c0c0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:
(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x400194c0e0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlo
ckStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CS
I:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x400194c100), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Q
uobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kindnet-cni", Image:"docker.io/kindest/kindnetd:v20241007-36f62932", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"HOST_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x400194c120)}, v1.EnvVar{Name:"POD_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x400194c160)}, v1.EnvVar{Name:"POD_SUBNET", Value:"10.244.0.0/16", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i
:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-cfg", ReadOnly:false, MountPath:"/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", Sub
Path:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x40011b6540), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x40020fa3b8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"kindnet", DeprecatedServiceAccount:"kindnet", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x40004b6070), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinit
y:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x4001d14000)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x40020fa400)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:1, NumberReady:0, ObservedGeneration:1, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:1, CollisionCount:(*int32)(nil), Conditions:[]v
1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kindnet": the object has been modified; please apply your changes to the latest version and try again
	E1028 11:24:58.936824       1 daemon_controller.go:320] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"dfa45256-7428-4f74-ade3-ef655454ad7c", ResourceVersion:"414", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63865711482, loc:(*time.Location)(0x632eb80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x400194c1e0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x400194c200)}, v1.ManagedFieldsEntry{Manager:"kube-co
ntroller-manager", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x400194c220), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x400194c240)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x400194c260), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElastic
BlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0x4002013b80), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSour
ce)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x400194c280), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSo
urce)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x400194c2a0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil),
Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.20.0", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil),
WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0x400194c2e0)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"F
ile", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x40011b6a80), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x40020fa5b8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x40004b60e0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)
(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x4001d14008)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x40020fa608)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:1, NumberReady:0, ObservedGeneration:1, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:1, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest ve
rsion and try again
	I1028 11:24:59.019027       1 shared_informer.go:247] Caches are synced for garbage collector 
	I1028 11:24:59.023585       1 shared_informer.go:247] Caches are synced for garbage collector 
	I1028 11:24:59.023615       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1028 11:24:59.864032       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
	I1028 11:24:59.905691       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-8hwrv"
	I1028 11:25:03.538182       1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I1028 11:26:28.837836       1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-9975d5f86 to 1"
	E1028 11:26:29.032226       1 clusterroleaggregation_controller.go:181] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
	
	
	==> kube-proxy [8d4b3dad3dd90f3ec833f354de4e8225bdaf07199d5245c988b1fdbc527c1015] <==
	I1028 11:24:59.829233       1 node.go:172] Successfully retrieved node IP: 192.168.76.2
	I1028 11:24:59.829534       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
	W1028 11:24:59.919895       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I1028 11:24:59.920116       1 server_others.go:185] Using iptables Proxier.
	I1028 11:24:59.921460       1 server.go:650] Version: v1.20.0
	I1028 11:24:59.923318       1 config.go:315] Starting service config controller
	I1028 11:24:59.923340       1 shared_informer.go:240] Waiting for caches to sync for service config
	I1028 11:24:59.923421       1 config.go:224] Starting endpoint slice config controller
	I1028 11:24:59.923434       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I1028 11:25:00.023488       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I1028 11:25:00.023806       1 shared_informer.go:247] Caches are synced for service config 
	
	
	==> kube-proxy [c0ed41137fbff35ffcb34df99174bf1cb9e6e2fda2154d421ab797a438e507bf] <==
	I1028 11:27:14.626597       1 node.go:172] Successfully retrieved node IP: 192.168.76.2
	I1028 11:27:14.626741       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
	W1028 11:27:14.653529       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I1028 11:27:14.653615       1 server_others.go:185] Using iptables Proxier.
	I1028 11:27:14.653823       1 server.go:650] Version: v1.20.0
	I1028 11:27:14.654747       1 config.go:315] Starting service config controller
	I1028 11:27:14.654765       1 shared_informer.go:240] Waiting for caches to sync for service config
	I1028 11:27:14.654796       1 config.go:224] Starting endpoint slice config controller
	I1028 11:27:14.654799       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I1028 11:27:14.754902       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I1028 11:27:14.754963       1 shared_informer.go:247] Caches are synced for service config 
	
	
	==> kube-scheduler [31281b2de0e80c98175b18b80c8ece18d25bb88841661719ca7805a5cb795824] <==
	I1028 11:27:07.754971       1 serving.go:331] Generated self-signed cert in-memory
	W1028 11:27:11.674766       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1028 11:27:11.674812       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1028 11:27:11.674852       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1028 11:27:11.674859       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1028 11:27:11.775536       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I1028 11:27:11.775661       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1028 11:27:11.775669       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1028 11:27:11.775690       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E1028 11:27:11.924355       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1028 11:27:11.924667       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1028 11:27:11.924866       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1028 11:27:11.925052       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1028 11:27:11.925228       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1028 11:27:11.925393       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1028 11:27:11.925566       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1028 11:27:11.925866       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1028 11:27:11.925999       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1028 11:27:11.926113       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1028 11:27:11.926184       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I1028 11:27:11.976291       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kube-scheduler [857580d96023ba113555b54f38493703fce44c36a25523c35c1fd07c51eee056] <==
	I1028 11:24:35.070503       1 serving.go:331] Generated self-signed cert in-memory
	W1028 11:24:39.542278       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1028 11:24:39.542504       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1028 11:24:39.542662       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1028 11:24:39.542755       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1028 11:24:39.588029       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I1028 11:24:39.588146       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1028 11:24:39.588160       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1028 11:24:39.588184       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E1028 11:24:39.599682       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1028 11:24:39.600114       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1028 11:24:39.600433       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1028 11:24:39.600696       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1028 11:24:39.600806       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1028 11:24:39.600884       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1028 11:24:39.600944       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1028 11:24:39.601000       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1028 11:24:39.601061       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1028 11:24:39.601123       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1028 11:24:39.601340       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1028 11:24:39.601427       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1028 11:24:40.487956       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1028 11:24:40.519994       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1028 11:24:40.524227       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1028 11:24:42.288418       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kubelet <==
	Oct 28 11:31:30 old-k8s-version-674802 kubelet[662]: E1028 11:31:30.396768     662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
	Oct 28 11:31:32 old-k8s-version-674802 kubelet[662]: E1028 11:31:32.397834     662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 28 11:31:42 old-k8s-version-674802 kubelet[662]: I1028 11:31:42.396459     662 scope.go:95] [topologymanager] RemoveContainer - Container ID: be9f8802d8916ebf81427a08c9b5699aeaa6098d28b576618aa5899e3f59e09b
	Oct 28 11:31:42 old-k8s-version-674802 kubelet[662]: E1028 11:31:42.396795     662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
	Oct 28 11:31:47 old-k8s-version-674802 kubelet[662]: E1028 11:31:47.398090     662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 28 11:31:56 old-k8s-version-674802 kubelet[662]: I1028 11:31:56.396437     662 scope.go:95] [topologymanager] RemoveContainer - Container ID: be9f8802d8916ebf81427a08c9b5699aeaa6098d28b576618aa5899e3f59e09b
	Oct 28 11:31:56 old-k8s-version-674802 kubelet[662]: E1028 11:31:56.396772     662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
	Oct 28 11:31:58 old-k8s-version-674802 kubelet[662]: E1028 11:31:58.397362     662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 28 11:32:07 old-k8s-version-674802 kubelet[662]: I1028 11:32:07.399489     662 scope.go:95] [topologymanager] RemoveContainer - Container ID: be9f8802d8916ebf81427a08c9b5699aeaa6098d28b576618aa5899e3f59e09b
	Oct 28 11:32:07 old-k8s-version-674802 kubelet[662]: E1028 11:32:07.399844     662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
	Oct 28 11:32:13 old-k8s-version-674802 kubelet[662]: E1028 11:32:13.397227     662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 28 11:32:19 old-k8s-version-674802 kubelet[662]: I1028 11:32:19.397163     662 scope.go:95] [topologymanager] RemoveContainer - Container ID: be9f8802d8916ebf81427a08c9b5699aeaa6098d28b576618aa5899e3f59e09b
	Oct 28 11:32:19 old-k8s-version-674802 kubelet[662]: E1028 11:32:19.398441     662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
	Oct 28 11:32:28 old-k8s-version-674802 kubelet[662]: E1028 11:32:28.398320     662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 28 11:32:34 old-k8s-version-674802 kubelet[662]: I1028 11:32:34.396501     662 scope.go:95] [topologymanager] RemoveContainer - Container ID: be9f8802d8916ebf81427a08c9b5699aeaa6098d28b576618aa5899e3f59e09b
	Oct 28 11:32:34 old-k8s-version-674802 kubelet[662]: E1028 11:32:34.396869     662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
	Oct 28 11:32:39 old-k8s-version-674802 kubelet[662]: E1028 11:32:39.418625     662 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
	Oct 28 11:32:39 old-k8s-version-674802 kubelet[662]: E1028 11:32:39.419047     662 kuberuntime_image.go:51] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
	Oct 28 11:32:39 old-k8s-version-674802 kubelet[662]: E1028 11:32:39.419679     662 kuberuntime_manager.go:829] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:metrics-server-token-bnmqq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec
:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-9975d5f86-lv8qx_kube-system(0813322
0-8dbe-4283-a64b-8a9383b25c93): ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
	Oct 28 11:32:39 old-k8s-version-674802 kubelet[662]: E1028 11:32:39.419884     662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Oct 28 11:32:47 old-k8s-version-674802 kubelet[662]: I1028 11:32:47.406280     662 scope.go:95] [topologymanager] RemoveContainer - Container ID: be9f8802d8916ebf81427a08c9b5699aeaa6098d28b576618aa5899e3f59e09b
	Oct 28 11:32:47 old-k8s-version-674802 kubelet[662]: E1028 11:32:47.407313     662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
	Oct 28 11:32:51 old-k8s-version-674802 kubelet[662]: E1028 11:32:51.416287     662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 28 11:32:58 old-k8s-version-674802 kubelet[662]: I1028 11:32:58.396479     662 scope.go:95] [topologymanager] RemoveContainer - Container ID: be9f8802d8916ebf81427a08c9b5699aeaa6098d28b576618aa5899e3f59e09b
	Oct 28 11:32:58 old-k8s-version-674802 kubelet[662]: E1028 11:32:58.396862     662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
	
	
	==> kubernetes-dashboard [9666309986efcb4076982c7df1d9e0c9f905cbcee4a0e3d7a1dcd2ab0132348b] <==
	2024/10/28 11:27:39 Using namespace: kubernetes-dashboard
	2024/10/28 11:27:39 Using in-cluster config to connect to apiserver
	2024/10/28 11:27:39 Using secret token for csrf signing
	2024/10/28 11:27:39 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/10/28 11:27:39 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/10/28 11:27:39 Successful initial request to the apiserver, version: v1.20.0
	2024/10/28 11:27:39 Generating JWE encryption key
	2024/10/28 11:27:39 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/10/28 11:27:39 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/10/28 11:27:40 Initializing JWE encryption key from synchronized object
	2024/10/28 11:27:40 Creating in-cluster Sidecar client
	2024/10/28 11:27:40 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/10/28 11:27:40 Serving insecurely on HTTP port: 9090
	2024/10/28 11:28:10 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/10/28 11:28:40 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/10/28 11:29:10 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/10/28 11:29:40 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/10/28 11:30:10 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/10/28 11:30:40 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/10/28 11:31:10 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/10/28 11:31:40 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/10/28 11:32:10 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/10/28 11:32:40 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/10/28 11:27:39 Starting overwatch
	
	
	==> storage-provisioner [af354fdce961d0c931e3e7b6826943560aa505683dbea93dedbce9a94105e0f8] <==
	I1028 11:27:58.535210       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1028 11:27:58.551888       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1028 11:27:58.552059       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1028 11:28:16.046242       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1028 11:28:16.046648       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-674802_477a9f62-9726-4555-8c7e-2c9d782fd3ee!
	I1028 11:28:16.051884       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6baa8baf-9963-4ef5-aec2-d198238af88a", APIVersion:"v1", ResourceVersion:"827", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-674802_477a9f62-9726-4555-8c7e-2c9d782fd3ee became leader
	I1028 11:28:16.147164       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-674802_477a9f62-9726-4555-8c7e-2c9d782fd3ee!
	
	
	==> storage-provisioner [e4aa22206b37d13d9665658eeeb808da6cd7c1789a0887c07a5f6460c9dd38f5] <==
	I1028 11:27:14.469698       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1028 11:27:44.472048       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-674802 -n old-k8s-version-674802
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-674802 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-9975d5f86-lv8qx
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-674802 describe pod metrics-server-9975d5f86-lv8qx
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-674802 describe pod metrics-server-9975d5f86-lv8qx: exit status 1 (124.228069ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-9975d5f86-lv8qx" not found

** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-674802 describe pod metrics-server-9975d5f86-lv8qx: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (378.69s)


Test pass (300/330)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 7.16
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.21
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.31.2/json-events 4.99
13 TestDownloadOnly/v1.31.2/preload-exists 0
17 TestDownloadOnly/v1.31.2/LogsDuration 0.07
18 TestDownloadOnly/v1.31.2/DeleteAll 0.2
19 TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.54
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 213.97
29 TestAddons/serial/Volcano 38.11
31 TestAddons/serial/GCPAuth/Namespaces 0.17
32 TestAddons/serial/GCPAuth/FakeCredentials 10.83
35 TestAddons/parallel/Registry 16.48
36 TestAddons/parallel/Ingress 19.82
37 TestAddons/parallel/InspektorGadget 11.88
38 TestAddons/parallel/MetricsServer 6.83
40 TestAddons/parallel/CSI 58.41
41 TestAddons/parallel/Headlamp 17.05
42 TestAddons/parallel/CloudSpanner 6.58
43 TestAddons/parallel/LocalPath 8.45
44 TestAddons/parallel/NvidiaDevicePlugin 6.51
45 TestAddons/parallel/Yakd 11.77
47 TestAddons/StoppedEnableDisable 12.34
48 TestCertOptions 35.82
49 TestCertExpiration 231.82
51 TestForceSystemdFlag 33.01
52 TestForceSystemdEnv 44.7
53 TestDockerEnvContainerd 45.34
58 TestErrorSpam/setup 28.12
59 TestErrorSpam/start 0.73
60 TestErrorSpam/status 1.07
61 TestErrorSpam/pause 1.81
62 TestErrorSpam/unpause 2.04
63 TestErrorSpam/stop 1.52
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 52.2
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 5.99
70 TestFunctional/serial/KubeContext 0.06
71 TestFunctional/serial/KubectlGetPods 0.09
74 TestFunctional/serial/CacheCmd/cache/add_remote 4
75 TestFunctional/serial/CacheCmd/cache/add_local 1.23
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.06
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
79 TestFunctional/serial/CacheCmd/cache/cache_reload 2.01
80 TestFunctional/serial/CacheCmd/cache/delete 0.13
81 TestFunctional/serial/MinikubeKubectlCmd 0.14
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
83 TestFunctional/serial/ExtraConfig 58.23
84 TestFunctional/serial/ComponentHealth 0.1
85 TestFunctional/serial/LogsCmd 1.68
86 TestFunctional/serial/LogsFileCmd 1.7
87 TestFunctional/serial/InvalidService 4.36
89 TestFunctional/parallel/ConfigCmd 0.49
90 TestFunctional/parallel/DashboardCmd 9.57
91 TestFunctional/parallel/DryRun 0.43
92 TestFunctional/parallel/InternationalLanguage 0.2
93 TestFunctional/parallel/StatusCmd 1.04
97 TestFunctional/parallel/ServiceCmdConnect 8.62
98 TestFunctional/parallel/AddonsCmd 0.19
99 TestFunctional/parallel/PersistentVolumeClaim 23.99
101 TestFunctional/parallel/SSHCmd 0.7
102 TestFunctional/parallel/CpCmd 2.35
104 TestFunctional/parallel/FileSync 0.33
105 TestFunctional/parallel/CertSync 2.04
109 TestFunctional/parallel/NodeLabels 0.1
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.57
113 TestFunctional/parallel/License 0.25
115 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.61
116 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
118 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.43
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.11
120 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
124 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
125 TestFunctional/parallel/ServiceCmd/DeployApp 8.22
126 TestFunctional/parallel/ProfileCmd/profile_not_create 0.43
127 TestFunctional/parallel/ProfileCmd/profile_list 0.42
128 TestFunctional/parallel/ProfileCmd/profile_json_output 0.42
129 TestFunctional/parallel/MountCmd/any-port 7.95
130 TestFunctional/parallel/ServiceCmd/List 0.55
131 TestFunctional/parallel/ServiceCmd/JSONOutput 0.58
132 TestFunctional/parallel/ServiceCmd/HTTPS 0.42
133 TestFunctional/parallel/ServiceCmd/Format 0.35
134 TestFunctional/parallel/ServiceCmd/URL 0.37
135 TestFunctional/parallel/MountCmd/specific-port 1.92
136 TestFunctional/parallel/MountCmd/VerifyCleanup 2.68
137 TestFunctional/parallel/Version/short 0.07
138 TestFunctional/parallel/Version/components 1.27
139 TestFunctional/parallel/ImageCommands/ImageListShort 0.27
140 TestFunctional/parallel/ImageCommands/ImageListTable 0.33
141 TestFunctional/parallel/ImageCommands/ImageListJson 0.29
142 TestFunctional/parallel/ImageCommands/ImageListYaml 0.27
143 TestFunctional/parallel/ImageCommands/ImageBuild 3.51
144 TestFunctional/parallel/ImageCommands/Setup 0.63
145 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.39
146 TestFunctional/parallel/UpdateContextCmd/no_changes 0.2
147 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.22
148 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.17
149 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.38
150 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.42
151 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.45
152 TestFunctional/parallel/ImageCommands/ImageRemove 0.46
153 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.7
154 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.39
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
161 TestMultiControlPlane/serial/StartCluster 116.68
162 TestMultiControlPlane/serial/DeployApp 34.55
163 TestMultiControlPlane/serial/PingHostFromPods 1.7
164 TestMultiControlPlane/serial/AddWorkerNode 21.81
165 TestMultiControlPlane/serial/NodeLabels 0.11
166 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.95
167 TestMultiControlPlane/serial/CopyFile 18.22
168 TestMultiControlPlane/serial/StopSecondaryNode 12.86
169 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.73
170 TestMultiControlPlane/serial/RestartSecondaryNode 18.05
171 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.96
172 TestMultiControlPlane/serial/RestartClusterKeepsNodes 135.61
173 TestMultiControlPlane/serial/DeleteSecondaryNode 10.67
174 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.73
175 TestMultiControlPlane/serial/StopCluster 36.07
176 TestMultiControlPlane/serial/RestartCluster 80.02
177 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.81
178 TestMultiControlPlane/serial/AddSecondaryNode 44.63
179 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.95
183 TestJSONOutput/start/Command 91.26
184 TestJSONOutput/start/Audit 0
186 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
189 TestJSONOutput/pause/Command 0.74
190 TestJSONOutput/pause/Audit 0
192 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/unpause/Command 0.65
196 TestJSONOutput/unpause/Audit 0
198 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/stop/Command 5.8
202 TestJSONOutput/stop/Audit 0
204 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
206 TestErrorJSONOutput 0.23
208 TestKicCustomNetwork/create_custom_network 42.22
209 TestKicCustomNetwork/use_default_bridge_network 31.61
210 TestKicExistingNetwork 31.93
211 TestKicCustomSubnet 34.11
212 TestKicStaticIP 34.48
213 TestMainNoArgs 0.06
214 TestMinikubeProfile 67.2
217 TestMountStart/serial/StartWithMountFirst 8.75
218 TestMountStart/serial/VerifyMountFirst 0.26
219 TestMountStart/serial/StartWithMountSecond 9.16
220 TestMountStart/serial/VerifyMountSecond 0.26
221 TestMountStart/serial/DeleteFirst 1.61
222 TestMountStart/serial/VerifyMountPostDelete 0.25
223 TestMountStart/serial/Stop 1.2
224 TestMountStart/serial/RestartStopped 7.13
225 TestMountStart/serial/VerifyMountPostStop 0.25
228 TestMultiNode/serial/FreshStart2Nodes 69.32
229 TestMultiNode/serial/DeployApp2Nodes 20.71
230 TestMultiNode/serial/PingHostFrom2Pods 1
231 TestMultiNode/serial/AddNode 19.28
232 TestMultiNode/serial/MultiNodeLabels 0.1
233 TestMultiNode/serial/ProfileList 0.67
234 TestMultiNode/serial/CopyFile 9.74
235 TestMultiNode/serial/StopNode 2.23
236 TestMultiNode/serial/StartAfterStop 9.9
237 TestMultiNode/serial/RestartKeepsNodes 97.31
238 TestMultiNode/serial/DeleteNode 5.51
239 TestMultiNode/serial/StopMultiNode 24
240 TestMultiNode/serial/RestartMultiNode 53.64
241 TestMultiNode/serial/ValidateNameConflict 31.98
246 TestPreload 107.75
248 TestScheduledStopUnix 107.94
251 TestInsufficientStorage 12.83
252 TestRunningBinaryUpgrade 76.91
254 TestKubernetesUpgrade 108.95
255 TestMissingContainerUpgrade 177.37
257 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
258 TestNoKubernetes/serial/StartWithK8s 39.29
259 TestNoKubernetes/serial/StartWithStopK8s 8.86
260 TestNoKubernetes/serial/Start 9.32
261 TestNoKubernetes/serial/VerifyK8sNotRunning 0.25
262 TestNoKubernetes/serial/ProfileList 0.93
263 TestNoKubernetes/serial/Stop 1.22
264 TestNoKubernetes/serial/StartNoArgs 6.38
265 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.27
266 TestStoppedBinaryUpgrade/Setup 0.71
267 TestStoppedBinaryUpgrade/Upgrade 133.82
276 TestPause/serial/Start 92.53
277 TestStoppedBinaryUpgrade/MinikubeLogs 1.18
278 TestPause/serial/SecondStartNoReconfiguration 6.37
286 TestNetworkPlugins/group/false 5.1
287 TestPause/serial/Pause 0.88
288 TestPause/serial/VerifyStatus 0.43
289 TestPause/serial/Unpause 0.81
290 TestPause/serial/PauseAgain 1.09
291 TestPause/serial/DeletePaused 2.82
295 TestPause/serial/VerifyDeletedResources 0.17
297 TestStartStop/group/old-k8s-version/serial/FirstStart 142.23
298 TestStartStop/group/old-k8s-version/serial/DeployApp 8.66
299 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.62
300 TestStartStop/group/old-k8s-version/serial/Stop 13.05
302 TestStartStop/group/no-preload/serial/FirstStart 63.8
303 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.25
305 TestStartStop/group/no-preload/serial/DeployApp 8.42
306 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.33
307 TestStartStop/group/no-preload/serial/Stop 12.06
308 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.23
309 TestStartStop/group/no-preload/serial/SecondStart 266.81
310 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
311 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.1
312 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.24
313 TestStartStop/group/no-preload/serial/Pause 3.04
315 TestStartStop/group/embed-certs/serial/FirstStart 94.53
316 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
317 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 6.13
318 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.28
319 TestStartStop/group/old-k8s-version/serial/Pause 3.48
321 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 51.08
322 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.36
323 TestStartStop/group/embed-certs/serial/DeployApp 9.31
324 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.09
325 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.03
326 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.07
327 TestStartStop/group/embed-certs/serial/Stop 12.05
328 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.19
329 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 293.04
330 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.28
331 TestStartStop/group/embed-certs/serial/SecondStart 305.13
332 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
333 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.09
334 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.25
335 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.01
337 TestStartStop/group/newest-cni/serial/FirstStart 40.91
338 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
339 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.11
340 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.33
341 TestStartStop/group/embed-certs/serial/Pause 3.88
342 TestNetworkPlugins/group/auto/Start 100.04
343 TestStartStop/group/newest-cni/serial/DeployApp 0
344 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.42
345 TestStartStop/group/newest-cni/serial/Stop 1.35
346 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.25
347 TestStartStop/group/newest-cni/serial/SecondStart 22.59
348 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
349 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
350 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.24
351 TestStartStop/group/newest-cni/serial/Pause 2.93
352 TestNetworkPlugins/group/kindnet/Start 88.96
353 TestNetworkPlugins/group/auto/KubeletFlags 0.31
354 TestNetworkPlugins/group/auto/NetCatPod 9.28
355 TestNetworkPlugins/group/auto/DNS 0.19
356 TestNetworkPlugins/group/auto/Localhost 0.15
357 TestNetworkPlugins/group/auto/HairPin 0.17
358 TestNetworkPlugins/group/calico/Start 63.23
359 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
360 TestNetworkPlugins/group/kindnet/KubeletFlags 0.33
361 TestNetworkPlugins/group/kindnet/NetCatPod 10.34
362 TestNetworkPlugins/group/kindnet/DNS 0.25
363 TestNetworkPlugins/group/kindnet/Localhost 0.16
364 TestNetworkPlugins/group/kindnet/HairPin 0.18
365 TestNetworkPlugins/group/custom-flannel/Start 52.45
366 TestNetworkPlugins/group/calico/ControllerPod 6.01
367 TestNetworkPlugins/group/calico/KubeletFlags 0.38
368 TestNetworkPlugins/group/calico/NetCatPod 12.32
369 TestNetworkPlugins/group/calico/DNS 0.32
370 TestNetworkPlugins/group/calico/Localhost 0.21
371 TestNetworkPlugins/group/calico/HairPin 0.18
372 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.3
373 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.37
374 TestNetworkPlugins/group/enable-default-cni/Start 72.71
375 TestNetworkPlugins/group/custom-flannel/DNS 0.22
376 TestNetworkPlugins/group/custom-flannel/Localhost 0.21
377 TestNetworkPlugins/group/custom-flannel/HairPin 0.21
378 TestNetworkPlugins/group/flannel/Start 48.68
379 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.28
380 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.3
381 TestNetworkPlugins/group/enable-default-cni/DNS 0.21
382 TestNetworkPlugins/group/enable-default-cni/Localhost 0.18
383 TestNetworkPlugins/group/flannel/ControllerPod 6.01
384 TestNetworkPlugins/group/enable-default-cni/HairPin 0.16
385 TestNetworkPlugins/group/flannel/KubeletFlags 0.41
386 TestNetworkPlugins/group/flannel/NetCatPod 11.35
387 TestNetworkPlugins/group/flannel/DNS 0.25
388 TestNetworkPlugins/group/flannel/Localhost 0.26
389 TestNetworkPlugins/group/flannel/HairPin 0.21
390 TestNetworkPlugins/group/bridge/Start 47.73
391 TestNetworkPlugins/group/bridge/KubeletFlags 0.27
392 TestNetworkPlugins/group/bridge/NetCatPod 10.28
393 TestNetworkPlugins/group/bridge/DNS 0.16
394 TestNetworkPlugins/group/bridge/Localhost 0.16
395 TestNetworkPlugins/group/bridge/HairPin 0.15
TestDownloadOnly/v1.20.0/json-events (7.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-938947 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-938947 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (7.158123715s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (7.16s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I1028 10:40:46.811734 1319098 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
I1028 10:40:46.811822 1319098 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19876-1313708/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-938947
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-938947: exit status 85 (63.201695ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-938947 | jenkins | v1.34.0 | 28 Oct 24 10:40 UTC |          |
	|         | -p download-only-938947        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/28 10:40:39
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.23.2 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1028 10:40:39.700965 1319103 out.go:345] Setting OutFile to fd 1 ...
	I1028 10:40:39.701080 1319103 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 10:40:39.701086 1319103 out.go:358] Setting ErrFile to fd 2...
	I1028 10:40:39.701091 1319103 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 10:40:39.701433 1319103 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19876-1313708/.minikube/bin
	W1028 10:40:39.701589 1319103 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19876-1313708/.minikube/config/config.json: open /home/jenkins/minikube-integration/19876-1313708/.minikube/config/config.json: no such file or directory
	I1028 10:40:39.702009 1319103 out.go:352] Setting JSON to true
	I1028 10:40:39.703157 1319103 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":145370,"bootTime":1729966670,"procs":148,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1028 10:40:39.703405 1319103 start.go:139] virtualization:  
	I1028 10:40:39.707327 1319103 out.go:97] [download-only-938947] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	W1028 10:40:39.707503 1319103 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19876-1313708/.minikube/cache/preloaded-tarball: no such file or directory
	I1028 10:40:39.707555 1319103 notify.go:220] Checking for updates...
	I1028 10:40:39.710109 1319103 out.go:169] MINIKUBE_LOCATION=19876
	I1028 10:40:39.712817 1319103 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 10:40:39.715458 1319103 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19876-1313708/kubeconfig
	I1028 10:40:39.717998 1319103 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19876-1313708/.minikube
	I1028 10:40:39.720531 1319103 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W1028 10:40:39.725763 1319103 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1028 10:40:39.726054 1319103 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 10:40:39.753194 1319103 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1028 10:40:39.753324 1319103 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1028 10:40:39.808409 1319103 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-10-28 10:40:39.798572734 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1028 10:40:39.808520 1319103 docker.go:318] overlay module found
	I1028 10:40:39.811337 1319103 out.go:97] Using the docker driver based on user configuration
	I1028 10:40:39.811365 1319103 start.go:297] selected driver: docker
	I1028 10:40:39.811372 1319103 start.go:901] validating driver "docker" against <nil>
	I1028 10:40:39.811478 1319103 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1028 10:40:39.860527 1319103 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-10-28 10:40:39.850998429 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1028 10:40:39.860746 1319103 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1028 10:40:39.861033 1319103 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I1028 10:40:39.861191 1319103 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1028 10:40:39.864071 1319103 out.go:169] Using Docker driver with root privileges
	I1028 10:40:39.866601 1319103 cni.go:84] Creating CNI manager for ""
	I1028 10:40:39.866667 1319103 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1028 10:40:39.866678 1319103 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1028 10:40:39.866759 1319103 start.go:340] cluster config:
	{Name:download-only-938947 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-938947 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 10:40:39.869368 1319103 out.go:97] Starting "download-only-938947" primary control-plane node in "download-only-938947" cluster
	I1028 10:40:39.869402 1319103 cache.go:121] Beginning downloading kic base image for docker with containerd
	I1028 10:40:39.872027 1319103 out.go:97] Pulling base image v0.0.45-1729876044-19868 ...
	I1028 10:40:39.872067 1319103 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I1028 10:40:39.872179 1319103 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e in local docker daemon
	I1028 10:40:39.888355 1319103 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e to local cache
	I1028 10:40:39.889237 1319103 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e in local cache directory
	I1028 10:40:39.889337 1319103 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e to local cache
	I1028 10:40:39.933366 1319103 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I1028 10:40:39.933393 1319103 cache.go:56] Caching tarball of preloaded images
	I1028 10:40:39.933566 1319103 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I1028 10:40:39.936679 1319103 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I1028 10:40:39.936703 1319103 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I1028 10:40:40.043595 1319103 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:7e3d48ccb9f143791669d02e14ce1643 -> /home/jenkins/minikube-integration/19876-1313708/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I1028 10:40:43.970296 1319103 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e as a tarball
	
	
	* The control-plane node download-only-938947 host does not exist
	  To start a cluster, run: "minikube start -p download-only-938947"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)
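
As an aside, the preload condition these tests assert can be spot-checked by hand: the tarball path comes from the preload-exists output above and the expected md5 is the checksum parameter on the download URL in the Last Start log. The test itself only checks that the file exists (the md5 was verified at download time), but a manual check reduces to:

	md5sum /home/jenkins/minikube-integration/19876-1313708/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	# expected: 7e3d48ccb9f143791669d02e14ce1643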

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.21s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-938947
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.31.2/json-events (4.99s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-469359 --force --alsologtostderr --kubernetes-version=v1.31.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-469359 --force --alsologtostderr --kubernetes-version=v1.31.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (4.993454047s)
--- PASS: TestDownloadOnly/v1.31.2/json-events (4.99s)

                                                
                                    
TestDownloadOnly/v1.31.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/preload-exists
I1028 10:40:52.210247 1319098 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime containerd
I1028 10:40:52.210290 1319098 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19876-1313708/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.2/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.2/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-469359
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-469359: exit status 85 (65.009494ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-938947 | jenkins | v1.34.0 | 28 Oct 24 10:40 UTC |                     |
	|         | -p download-only-938947        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 28 Oct 24 10:40 UTC | 28 Oct 24 10:40 UTC |
	| delete  | -p download-only-938947        | download-only-938947 | jenkins | v1.34.0 | 28 Oct 24 10:40 UTC | 28 Oct 24 10:40 UTC |
	| start   | -o=json --download-only        | download-only-469359 | jenkins | v1.34.0 | 28 Oct 24 10:40 UTC |                     |
	|         | -p download-only-469359        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/28 10:40:47
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.23.2 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1028 10:40:47.274946 1319308 out.go:345] Setting OutFile to fd 1 ...
	I1028 10:40:47.275068 1319308 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 10:40:47.275077 1319308 out.go:358] Setting ErrFile to fd 2...
	I1028 10:40:47.275083 1319308 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 10:40:47.275321 1319308 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19876-1313708/.minikube/bin
	I1028 10:40:47.275766 1319308 out.go:352] Setting JSON to true
	I1028 10:40:47.276572 1319308 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":145378,"bootTime":1729966670,"procs":145,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1028 10:40:47.276642 1319308 start.go:139] virtualization:  
	I1028 10:40:47.278382 1319308 out.go:97] [download-only-469359] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1028 10:40:47.278615 1319308 notify.go:220] Checking for updates...
	I1028 10:40:47.279994 1319308 out.go:169] MINIKUBE_LOCATION=19876
	I1028 10:40:47.281208 1319308 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 10:40:47.282423 1319308 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19876-1313708/kubeconfig
	I1028 10:40:47.283815 1319308 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19876-1313708/.minikube
	I1028 10:40:47.284894 1319308 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W1028 10:40:47.287717 1319308 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1028 10:40:47.287985 1319308 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 10:40:47.309793 1319308 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1028 10:40:47.309921 1319308 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1028 10:40:47.376563 1319308 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-10-28 10:40:47.366698214 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1028 10:40:47.376673 1319308 docker.go:318] overlay module found
	I1028 10:40:47.378401 1319308 out.go:97] Using the docker driver based on user configuration
	I1028 10:40:47.378427 1319308 start.go:297] selected driver: docker
	I1028 10:40:47.378433 1319308 start.go:901] validating driver "docker" against <nil>
	I1028 10:40:47.378534 1319308 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1028 10:40:47.433860 1319308 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-10-28 10:40:47.424571149 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1028 10:40:47.434096 1319308 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1028 10:40:47.434394 1319308 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I1028 10:40:47.434546 1319308 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1028 10:40:47.436921 1319308 out.go:169] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-469359 host does not exist
	  To start a cluster, run: "minikube start -p download-only-469359"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.2/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.31.2/DeleteAll (0.2s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.2/DeleteAll (0.20s)

                                                
                                    
TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-469359
--- PASS: TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestBinaryMirror (0.54s)

                                                
                                                
=== RUN   TestBinaryMirror
I1028 10:40:53.392192 1319098 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-357446 --alsologtostderr --binary-mirror http://127.0.0.1:45971 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-357446" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-357446
--- PASS: TestBinaryMirror (0.54s)
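
TestBinaryMirror exercises minikube's --binary-mirror flag, which redirects the kubectl/kubelet/kubeadm downloads from dl.k8s.io to a caller-supplied HTTP endpoint (http://127.0.0.1:45971 in this run; the test starts its own short-lived server). A rough hand-rolled equivalent, assuming a local ./mirror directory that mimics the release tree layout visible in the URL above (release/v1.31.2/bin/linux/arm64/...), might be:

	python3 -m http.server 45971 --directory ./mirror &
	out/minikube-linux-arm64 start --download-only -p binary-mirror-357446 --binary-mirror http://127.0.0.1:45971 --driver=docker --container-runtime=containerd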

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-487046
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-487046: exit status 85 (65.493226ms)

                                                
                                                
-- stdout --
	* Profile "addons-487046" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-487046"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-487046
addons_test.go:950: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-487046: exit status 85 (67.039307ms)

                                                
                                                
-- stdout --
	* Profile "addons-487046" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-487046"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/Setup (213.97s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-arm64 start -p addons-487046 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-arm64 start -p addons-487046 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m33.969683065s)
--- PASS: TestAddons/Setup (213.97s)
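
For reference, once a profile such as addons-487046 is up with this many addons (fourteen are enabled in the single start command above), the resulting addon state can be inspected or adjusted with the stock subcommands rather than a full restart; the disable form below is the same one the later parallel tests use:

	out/minikube-linux-arm64 -p addons-487046 addons list
	out/minikube-linux-arm64 -p addons-487046 addons disable volcano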

                                                
                                    
TestAddons/serial/Volcano (38.11s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:807: volcano-scheduler stabilized in 64.580068ms
addons_test.go:815: volcano-admission stabilized in 64.764356ms
addons_test.go:823: volcano-controller stabilized in 64.964906ms
addons_test.go:829: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-6c9778cbdf-2pqbl" [46f0da79-8f5e-463e-8378-875ee7804555] Running
addons_test.go:829: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.004908596s
addons_test.go:833: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-5874dfdd79-4qtxq" [17368e10-7086-4576-acf0-466d3ba81905] Running
addons_test.go:833: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003361786s
addons_test.go:837: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-789ffc5785-h2vkh" [c0786adb-47c6-42f3-80e8-d6dfff50afc4] Running
addons_test.go:837: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003631881s
addons_test.go:842: (dbg) Run:  kubectl --context addons-487046 delete -n volcano-system job volcano-admission-init
addons_test.go:848: (dbg) Run:  kubectl --context addons-487046 create -f testdata/vcjob.yaml
addons_test.go:856: (dbg) Run:  kubectl --context addons-487046 get vcjob -n my-volcano
addons_test.go:874: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [014e79d6-f31b-4b5d-9bec-d97da6f93c7a] Pending
helpers_test.go:344: "test-job-nginx-0" [014e79d6-f31b-4b5d-9bec-d97da6f93c7a] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [014e79d6-f31b-4b5d-9bec-d97da6f93c7a] Running
addons_test.go:874: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 10.004209364s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-487046 addons disable volcano --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-487046 addons disable volcano --alsologtostderr -v=1: (11.469894943s)
--- PASS: TestAddons/serial/Volcano (38.11s)
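
The vcjob fixture itself is not reproduced in the log, but the pod name test-job-nginx-0 follows Volcano's <job>-<task>-<index> naming, so testdata/vcjob.yaml plausibly resembles the sketch below (a guess at the fixture's shape based on the upstream Volcano Job API, not its actual contents; indent with spaces if saving as YAML):

	apiVersion: batch.volcano.sh/v1alpha1
	kind: Job
	metadata:
	  name: test-job
	  namespace: my-volcano
	spec:
	  schedulerName: volcano
	  minAvailable: 1
	  tasks:
	  - replicas: 1
	    name: nginx
	    template:
	      spec:
	        restartPolicy: Never
	        containers:
	        - name: nginx
	          image: nginx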

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.17s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-487046 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-487046 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.17s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (10.83s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-487046 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-487046 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [f84bc142-a9de-4ae4-b77f-62a1baed9847] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [f84bc142-a9de-4ae4-b77f-62a1baed9847] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.004354921s
addons_test.go:633: (dbg) Run:  kubectl --context addons-487046 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-487046 describe sa gcp-auth-test
addons_test.go:659: (dbg) Run:  kubectl --context addons-487046 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:683: (dbg) Run:  kubectl --context addons-487046 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.83s)

TestAddons/parallel/Registry (16.48s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 4.926846ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-spwq7" [635c9fae-0809-4f12-8b83-307b9fae4466] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.003499968s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-dls2s" [7593c897-122f-4186-93ae-2c742b0a4850] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004268688s
addons_test.go:331: (dbg) Run:  kubectl --context addons-487046 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-487046 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-487046 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.399033381s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-arm64 -p addons-487046 ip
2024/10/28 10:45:42 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-487046 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.48s)
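
The registry check above probes the addon twice: in-cluster by service DNS name, then from the host via the node IP (the DEBUG line shows port 5000 answering). A condensed manual equivalent, assuming the addon is enabled; /v2/_catalog is the standard registry HTTP API endpoint:

	# In-cluster probe, same command the test runs:
	kubectl --context addons-487046 run --rm registry-test --restart=Never \
	  --image=gcr.io/k8s-minikube/busybox -it -- \
	  sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
	# Host-side probe via the node IP:
	curl -sS "http://$(out/minikube-linux-arm64 -p addons-487046 ip):5000/v2/_catalog"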

TestAddons/parallel/Ingress (19.82s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-487046 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-487046 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-487046 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [6db17842-7de8-460d-b96c-a6da509c7536] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [6db17842-7de8-460d-b96c-a6da509c7536] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 8.003687627s
I1028 10:46:10.881614 1319098 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-arm64 -p addons-487046 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context addons-487046 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-arm64 -p addons-487046 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-487046 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-487046 addons disable ingress-dns --alsologtostderr -v=1: (1.952250223s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-487046 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-487046 addons disable ingress --alsologtostderr -v=1: (7.941956485s)
--- PASS: TestAddons/parallel/Ingress (19.82s)
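
The two assertions above reduce to an HTTP request through the ingress controller (the Host header selects the ingress rule) and a DNS lookup answered by ingress-dns on the node IP; condensed, using the same commands the test runs:

	out/minikube-linux-arm64 -p addons-487046 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
	nslookup hello-john.test "$(out/minikube-linux-arm64 -p addons-487046 ip)"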

TestAddons/parallel/InspektorGadget (11.88s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-5sz5f" [f40a77dd-5bbf-488f-81fb-024d149e6b38] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003699382s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-487046 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-487046 addons disable inspektor-gadget --alsologtostderr -v=1: (5.875910081s)
--- PASS: TestAddons/parallel/InspektorGadget (11.88s)

TestAddons/parallel/MetricsServer (6.83s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 2.556879ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-jxt4k" [5902cdad-3284-498c-890d-07cde7b9f4ee] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.003452354s
addons_test.go:402: (dbg) Run:  kubectl --context addons-487046 top pods -n kube-system
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-487046 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.83s)

TestAddons/parallel/CSI (58.41s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
I1028 10:45:42.810179 1319098 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1028 10:45:42.821415 1319098 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1028 10:45:42.821446 1319098 kapi.go:107] duration metric: took 12.322717ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 12.334032ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-487046 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-487046 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-487046 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-487046 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-487046 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-487046 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-487046 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-487046 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-487046 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-487046 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-487046 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-487046 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-487046 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-487046 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [5e9b7456-c5aa-4c1f-934a-89feb67ee3e5] Pending
helpers_test.go:344: "task-pv-pod" [5e9b7456-c5aa-4c1f-934a-89feb67ee3e5] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [5e9b7456-c5aa-4c1f-934a-89feb67ee3e5] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.004285898s
addons_test.go:511: (dbg) Run:  kubectl --context addons-487046 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-487046 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-487046 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-487046 delete pod task-pv-pod
addons_test.go:521: (dbg) Done: kubectl --context addons-487046 delete pod task-pv-pod: (1.332243606s)
addons_test.go:527: (dbg) Run:  kubectl --context addons-487046 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-487046 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-487046 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-487046 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-487046 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-487046 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-487046 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-487046 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-487046 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-487046 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-487046 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-487046 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-487046 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-487046 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-487046 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-487046 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-487046 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-487046 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-487046 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-487046 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-487046 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-487046 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-487046 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-487046 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [23a928fe-2ab4-45cb-80c9-0f5d883fe017] Pending
helpers_test.go:344: "task-pv-pod-restore" [23a928fe-2ab4-45cb-80c9-0f5d883fe017] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [23a928fe-2ab4-45cb-80c9-0f5d883fe017] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003271556s
addons_test.go:553: (dbg) Run:  kubectl --context addons-487046 delete pod task-pv-pod-restore
addons_test.go:557: (dbg) Run:  kubectl --context addons-487046 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-487046 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-487046 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-487046 addons disable volumesnapshots --alsologtostderr -v=1: (1.015919613s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-487046 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-487046 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.788225485s)
--- PASS: TestAddons/parallel/CSI (58.41s)
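
The CSI pass above is a full PVC -> pod -> snapshot -> restore round trip. The testdata manifests are not reproduced in this report; a minimal sketch of the snapshot and restore objects, with class names and size assumed for illustration:

	# Snapshot the bound claim "hpvc", then restore it into "hpvc-restore".
	kubectl --context addons-487046 apply -f - <<-'EOF'
	apiVersion: snapshot.storage.k8s.io/v1
	kind: VolumeSnapshot
	metadata:
	  name: new-snapshot-demo
	spec:
	  volumeSnapshotClassName: csi-hostpath-snapclass   # assumed class name
	  source:
	    persistentVolumeClaimName: hpvc
	EOF
	kubectl --context addons-487046 apply -f - <<-'EOF'
	apiVersion: v1
	kind: PersistentVolumeClaim
	metadata:
	  name: hpvc-restore
	spec:
	  storageClassName: csi-hostpath-sc   # assumed class name
	  dataSource:
	    apiGroup: snapshot.storage.k8s.io
	    kind: VolumeSnapshot
	    name: new-snapshot-demo
	  accessModes: ["ReadWriteOnce"]
	  resources:
	    requests:
	      storage: 1Gi   # assumed size
	EOF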

TestAddons/parallel/Headlamp (17.05s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-487046 --alsologtostderr -v=1
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-frr64" [7d7c61c4-a90b-4eb9-bb30-cbcf39dc6b1d] Pending
helpers_test.go:344: "headlamp-7b5c95b59d-frr64" [7d7c61c4-a90b-4eb9-bb30-cbcf39dc6b1d] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-frr64" [7d7c61c4-a90b-4eb9-bb30-cbcf39dc6b1d] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.003253934s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-487046 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-487046 addons disable headlamp --alsologtostderr -v=1: (6.046126188s)
--- PASS: TestAddons/parallel/Headlamp (17.05s)

TestAddons/parallel/CloudSpanner (6.58s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5b584cc74-m9k8k" [eb83870c-ea9e-4e32-a640-b28e31022589] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003900081s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-487046 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.58s)

TestAddons/parallel/LocalPath (8.45s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-487046 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-487046 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-487046 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-487046 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-487046 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-487046 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-487046 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [f7cf6bba-b74d-47a2-af9f-440017288861] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [f7cf6bba-b74d-47a2-af9f-440017288861] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [f7cf6bba-b74d-47a2-af9f-440017288861] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003449889s
addons_test.go:906: (dbg) Run:  kubectl --context addons-487046 get pvc test-pvc -o=json
addons_test.go:915: (dbg) Run:  out/minikube-linux-arm64 -p addons-487046 ssh "cat /opt/local-path-provisioner/pvc-0af32a63-a1a3-46b8-a72d-de5fdc011a39_default_test-pvc/file1"
addons_test.go:927: (dbg) Run:  kubectl --context addons-487046 delete pod test-local-path
addons_test.go:931: (dbg) Run:  kubectl --context addons-487046 delete pvc test-pvc
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-487046 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (8.45s)
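
The local-path pass above provisions under /opt/local-path-provisioner (see the ssh "cat" step), the layout used by the rancher local-path provisioner. A minimal claim sketch, assuming the addon's storage class is named "local-path":

	kubectl --context addons-487046 apply -f - <<-'EOF'
	apiVersion: v1
	kind: PersistentVolumeClaim
	metadata:
	  name: test-pvc
	spec:
	  storageClassName: local-path   # assumed class name
	  accessModes: ["ReadWriteOnce"]
	  resources:
	    requests:
	      storage: 128Mi   # assumed size
	EOF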

TestAddons/parallel/NvidiaDevicePlugin (6.51s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-j44lp" [c309fcd5-5698-4055-80bd-6c7f225f5262] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003811204s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-487046 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.51s)

TestAddons/parallel/Yakd (11.77s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-bx6gc" [b7310e7d-d367-4cbb-83b0-07e8f766a0f5] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.00359832s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-487046 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-487046 addons disable yakd --alsologtostderr -v=1: (5.766716379s)
--- PASS: TestAddons/parallel/Yakd (11.77s)

TestAddons/StoppedEnableDisable (12.34s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-487046
addons_test.go:170: (dbg) Done: out/minikube-linux-arm64 stop -p addons-487046: (12.051705656s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-487046
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-487046
addons_test.go:183: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-487046
--- PASS: TestAddons/StoppedEnableDisable (12.34s)

TestCertOptions (35.82s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-136781 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-136781 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (33.148792516s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-136781 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-136781 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-136781 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-136781" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-136781
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-136781: (2.015553241s)
--- PASS: TestCertOptions (35.82s)

TestCertExpiration (231.82s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-219316 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-219316 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (42.532979609s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-219316 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-219316 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (6.686042878s)
helpers_test.go:175: Cleaning up "cert-expiration-219316" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-219316
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-219316: (2.60242499s)
--- PASS: TestCertExpiration (231.82s)

TestForceSystemdFlag (33.01s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-121011 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-121011 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (30.614886841s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-121011 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-121011" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-121011
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-121011: (2.121870894s)
--- PASS: TestForceSystemdFlag (33.01s)

TestForceSystemdEnv (44.7s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-229837 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-229837 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (41.662990352s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-229837 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-229837" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-229837
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-229837: (2.661565281s)
--- PASS: TestForceSystemdEnv (44.70s)

TestDockerEnvContainerd (45.34s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-500329 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-500329 --driver=docker  --container-runtime=containerd: (29.926676537s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-500329"
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-mpduoJKRe80K/agent.1340244" SSH_AGENT_PID="1340245" DOCKER_HOST=ssh://docker@127.0.0.1:40085 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-mpduoJKRe80K/agent.1340244" SSH_AGENT_PID="1340245" DOCKER_HOST=ssh://docker@127.0.0.1:40085 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-mpduoJKRe80K/agent.1340244" SSH_AGENT_PID="1340245" DOCKER_HOST=ssh://docker@127.0.0.1:40085 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.077292587s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-mpduoJKRe80K/agent.1340244" SSH_AGENT_PID="1340245" DOCKER_HOST=ssh://docker@127.0.0.1:40085 docker image ls"
helpers_test.go:175: Cleaning up "dockerenv-500329" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-500329
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-500329: (1.938201161s)
--- PASS: TestDockerEnvContainerd (45.34s)
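
The docker-env test drives a host docker CLI against the daemon inside the minikube node over SSH. The same workflow by hand, using the flags exercised above (eval'ing the command's output is the usual way to consume docker-env):

	out/minikube-linux-arm64 start -p dockerenv-500329 --driver=docker --container-runtime=containerd
	eval "$(out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-500329)"
	docker version   # now talks to the dockerd inside the node via ssh://
	DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env
	docker image ls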

TestErrorSpam/setup (28.12s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-860119 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-860119 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-860119 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-860119 --driver=docker  --container-runtime=containerd: (28.118292685s)
--- PASS: TestErrorSpam/setup (28.12s)

TestErrorSpam/start (0.73s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-860119 --log_dir /tmp/nospam-860119 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-860119 --log_dir /tmp/nospam-860119 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-860119 --log_dir /tmp/nospam-860119 start --dry-run
--- PASS: TestErrorSpam/start (0.73s)

TestErrorSpam/status (1.07s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-860119 --log_dir /tmp/nospam-860119 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-860119 --log_dir /tmp/nospam-860119 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-860119 --log_dir /tmp/nospam-860119 status
--- PASS: TestErrorSpam/status (1.07s)

TestErrorSpam/pause (1.81s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-860119 --log_dir /tmp/nospam-860119 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-860119 --log_dir /tmp/nospam-860119 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-860119 --log_dir /tmp/nospam-860119 pause
--- PASS: TestErrorSpam/pause (1.81s)

TestErrorSpam/unpause (2.04s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-860119 --log_dir /tmp/nospam-860119 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-860119 --log_dir /tmp/nospam-860119 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-860119 --log_dir /tmp/nospam-860119 unpause
--- PASS: TestErrorSpam/unpause (2.04s)

TestErrorSpam/stop (1.52s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-860119 --log_dir /tmp/nospam-860119 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-860119 --log_dir /tmp/nospam-860119 stop: (1.307046562s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-860119 --log_dir /tmp/nospam-860119 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-860119 --log_dir /tmp/nospam-860119 stop
--- PASS: TestErrorSpam/stop (1.52s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19876-1313708/.minikube/files/etc/test/nested/copy/1319098/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (52.2s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-arm64 start -p functional-355847 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
functional_test.go:2234: (dbg) Done: out/minikube-linux-arm64 start -p functional-355847 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (52.198787087s)
--- PASS: TestFunctional/serial/StartWithProxy (52.20s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (5.99s)

=== RUN   TestFunctional/serial/SoftStart
I1028 10:49:24.264289 1319098 config.go:182] Loaded profile config "functional-355847": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
functional_test.go:659: (dbg) Run:  out/minikube-linux-arm64 start -p functional-355847 --alsologtostderr -v=8
E1028 10:49:28.002543 1319098 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/addons-487046/client.crt: no such file or directory" logger="UnhandledError"
E1028 10:49:28.009695 1319098 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/addons-487046/client.crt: no such file or directory" logger="UnhandledError"
E1028 10:49:28.021089 1319098 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/addons-487046/client.crt: no such file or directory" logger="UnhandledError"
E1028 10:49:28.042796 1319098 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/addons-487046/client.crt: no such file or directory" logger="UnhandledError"
E1028 10:49:28.083997 1319098 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/addons-487046/client.crt: no such file or directory" logger="UnhandledError"
E1028 10:49:28.165538 1319098 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/addons-487046/client.crt: no such file or directory" logger="UnhandledError"
E1028 10:49:28.327514 1319098 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/addons-487046/client.crt: no such file or directory" logger="UnhandledError"
E1028 10:49:28.648760 1319098 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/addons-487046/client.crt: no such file or directory" logger="UnhandledError"
E1028 10:49:29.290550 1319098 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/addons-487046/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:659: (dbg) Done: out/minikube-linux-arm64 start -p functional-355847 --alsologtostderr -v=8: (5.986191653s)
functional_test.go:663: soft start took 5.988053956s for "functional-355847" cluster.
I1028 10:49:30.250788 1319098 config.go:182] Loaded profile config "functional-355847": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
--- PASS: TestFunctional/serial/SoftStart (5.99s)

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.09s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-355847 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

TestFunctional/serial/CacheCmd/cache/add_remote (4s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-355847 cache add registry.k8s.io/pause:3.1
E1028 10:49:30.572722 1319098 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/addons-487046/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-355847 cache add registry.k8s.io/pause:3.1: (1.478404059s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-355847 cache add registry.k8s.io/pause:3.3
E1028 10:49:33.134090 1319098 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/addons-487046/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-355847 cache add registry.k8s.io/pause:3.3: (1.348209674s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-355847 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-355847 cache add registry.k8s.io/pause:latest: (1.175687907s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.00s)

TestFunctional/serial/CacheCmd/cache/add_local (1.23s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-355847 /tmp/TestFunctionalserialCacheCmdcacheadd_local2825288980/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-arm64 -p functional-355847 cache add minikube-local-cache-test:functional-355847
functional_test.go:1094: (dbg) Run:  out/minikube-linux-arm64 -p functional-355847 cache delete minikube-local-cache-test:functional-355847
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-355847
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.23s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-arm64 -p functional-355847 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.01s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-arm64 -p functional-355847 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-arm64 -p functional-355847 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-355847 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (283.509996ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-arm64 -p functional-355847 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-arm64 -p functional-355847 cache reload: (1.102927767s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-arm64 -p functional-355847 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.01s)

TestFunctional/serial/CacheCmd/cache/delete (0.13s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)
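
Taken together, the CacheCmd tests above walk minikube's image cache through its whole lifecycle; condensed into one sequence using the same subcommands:

	out/minikube-linux-arm64 -p functional-355847 cache add registry.k8s.io/pause:3.1   # fetch and load into the node
	out/minikube-linux-arm64 cache list                                                 # list the local cache
	out/minikube-linux-arm64 -p functional-355847 ssh sudo crictl rmi registry.k8s.io/pause:3.1   # remove it inside the node
	out/minikube-linux-arm64 -p functional-355847 cache reload                          # push cached images back into the node
	out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1                     # drop it from the local cache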

TestFunctional/serial/MinikubeKubectlCmd (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-arm64 -p functional-355847 kubectl -- --context functional-355847 get pods
E1028 10:49:38.256535 1319098 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/addons-487046/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-355847 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

TestFunctional/serial/ExtraConfig (58.23s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-arm64 start -p functional-355847 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1028 10:49:48.498112 1319098 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/addons-487046/client.crt: no such file or directory" logger="UnhandledError"
E1028 10:50:08.979773 1319098 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/addons-487046/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:757: (dbg) Done: out/minikube-linux-arm64 start -p functional-355847 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (58.227437393s)
functional_test.go:761: restart took 58.22754529s for "functional-355847" cluster.
I1028 10:50:36.717862 1319098 config.go:182] Loaded profile config "functional-355847": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
--- PASS: TestFunctional/serial/ExtraConfig (58.23s)

TestFunctional/serial/ComponentHealth (0.1s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-355847 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

TestFunctional/serial/LogsCmd (1.68s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-arm64 -p functional-355847 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-arm64 -p functional-355847 logs: (1.678952839s)
--- PASS: TestFunctional/serial/LogsCmd (1.68s)

TestFunctional/serial/LogsFileCmd (1.7s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-arm64 -p functional-355847 logs --file /tmp/TestFunctionalserialLogsFileCmd2284130316/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-arm64 -p functional-355847 logs --file /tmp/TestFunctionalserialLogsFileCmd2284130316/001/logs.txt: (1.699171308s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.70s)

TestFunctional/serial/InvalidService (4.36s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-355847 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-355847
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-355847: exit status 115 (527.898929ms)
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:32720 |
	|-----------|-------------|-------------|---------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-355847 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.36s)

TestFunctional/parallel/ConfigCmd (0.49s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-355847 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-355847 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-355847 config get cpus: exit status 14 (82.993403ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-355847 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-355847 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-355847 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-355847 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-355847 config get cpus: exit status 14 (81.517029ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.49s)
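
The two exit-14 failures are the behavior under test: `config get` on an unset key fails with "specified key could not be found in config", both before the key is ever set and again after `config unset`. A minimal sketch of the same round-trip, assuming only the subcommands and exit code shown above:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// run executes a minikube config subcommand and returns its exit code.
// Binary path and profile name are the ones from this run.
func run(args ...string) int {
	full := append([]string{"-p", "functional-355847", "config"}, args...)
	err := exec.Command("out/minikube-linux-arm64", full...).Run()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		return exitErr.ExitCode()
	}
	if err != nil {
		return -1 // binary missing or not executable
	}
	return 0
}

func main() {
	fmt.Println(run("unset", "cpus"))    // 0: unsetting is idempotent
	fmt.Println(run("get", "cpus"))      // 14: key not found
	fmt.Println(run("set", "cpus", "2")) // 0
	fmt.Println(run("get", "cpus"))      // 0: prints 2
	fmt.Println(run("unset", "cpus"))    // 0
	fmt.Println(run("get", "cpus"))      // 14 again
}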

TestFunctional/parallel/DashboardCmd (9.57s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-355847 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-355847 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 1355369: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (9.57s)

TestFunctional/parallel/DryRun (0.43s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-arm64 start -p functional-355847 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-355847 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (194.900601ms)

-- stdout --
	* [functional-355847] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19876
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19876-1313708/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19876-1313708/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1028 10:51:16.612836 1355078 out.go:345] Setting OutFile to fd 1 ...
	I1028 10:51:16.612969 1355078 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 10:51:16.612980 1355078 out.go:358] Setting ErrFile to fd 2...
	I1028 10:51:16.612986 1355078 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 10:51:16.613216 1355078 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19876-1313708/.minikube/bin
	I1028 10:51:16.613683 1355078 out.go:352] Setting JSON to false
	I1028 10:51:16.614719 1355078 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":146007,"bootTime":1729966670,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1028 10:51:16.614787 1355078 start.go:139] virtualization:  
	I1028 10:51:16.617905 1355078 out.go:177] * [functional-355847] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1028 10:51:16.621274 1355078 out.go:177]   - MINIKUBE_LOCATION=19876
	I1028 10:51:16.621341 1355078 notify.go:220] Checking for updates...
	I1028 10:51:16.626685 1355078 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 10:51:16.629302 1355078 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19876-1313708/kubeconfig
	I1028 10:51:16.631907 1355078 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19876-1313708/.minikube
	I1028 10:51:16.634537 1355078 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1028 10:51:16.637353 1355078 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 10:51:16.640482 1355078 config.go:182] Loaded profile config "functional-355847": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
	I1028 10:51:16.640990 1355078 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 10:51:16.672291 1355078 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1028 10:51:16.672428 1355078 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1028 10:51:16.732989 1355078 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-10-28 10:51:16.723537181 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1028 10:51:16.733098 1355078 docker.go:318] overlay module found
	I1028 10:51:16.735980 1355078 out.go:177] * Using the docker driver based on existing profile
	I1028 10:51:16.738847 1355078 start.go:297] selected driver: docker
	I1028 10:51:16.738869 1355078 start.go:901] validating driver "docker" against &{Name:functional-355847 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-355847 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 10:51:16.738989 1355078 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 10:51:16.742412 1355078 out.go:201] 
	W1028 10:51:16.744928 1355078 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1028 10:51:16.747484 1355078 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-arm64 start -p functional-355847 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.43s)
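
Exit status 23 is the dry run rejecting `--memory 250MB`: minikube validates the requested memory against a usable floor before doing any work. A sketch mirroring that check; the 1800MB constant is taken from the logged RSRC_INSUFFICIENT_REQ_MEMORY message, not from minikube's source:

package main

import "fmt"

const minUsableMemoryMB = 1800 // floor quoted in the logged error, not read from minikube source

// validateMemoryMB mirrors the dry-run check above: requests below the
// floor are rejected before any cluster work starts.
func validateMemoryMB(requested int) error {
	if requested < minUsableMemoryMB {
		return fmt.Errorf("requested memory allocation %dMiB is less than the usable minimum of %dMB",
			requested, minUsableMemoryMB)
	}
	return nil
}

func main() {
	fmt.Println(validateMemoryMB(250))  // fails, as the --memory 250MB run did
	fmt.Println(validateMemoryMB(4000)) // passes (the profile's actual allocation)
}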

TestFunctional/parallel/InternationalLanguage (0.2s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-arm64 start -p functional-355847 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-355847 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (203.876999ms)

-- stdout --
	* [functional-355847] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19876
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19876-1313708/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19876-1313708/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1028 10:51:16.420755 1355013 out.go:345] Setting OutFile to fd 1 ...
	I1028 10:51:16.420891 1355013 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 10:51:16.420897 1355013 out.go:358] Setting ErrFile to fd 2...
	I1028 10:51:16.420902 1355013 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 10:51:16.422043 1355013 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19876-1313708/.minikube/bin
	I1028 10:51:16.422506 1355013 out.go:352] Setting JSON to false
	I1028 10:51:16.423500 1355013 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":146007,"bootTime":1729966670,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1028 10:51:16.423577 1355013 start.go:139] virtualization:  
	I1028 10:51:16.425840 1355013 out.go:177] * [functional-355847] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	I1028 10:51:16.427572 1355013 out.go:177]   - MINIKUBE_LOCATION=19876
	I1028 10:51:16.427703 1355013 notify.go:220] Checking for updates...
	I1028 10:51:16.431969 1355013 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 10:51:16.434201 1355013 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19876-1313708/kubeconfig
	I1028 10:51:16.437005 1355013 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19876-1313708/.minikube
	I1028 10:51:16.439740 1355013 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1028 10:51:16.442463 1355013 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 10:51:16.445247 1355013 config.go:182] Loaded profile config "functional-355847": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
	I1028 10:51:16.445808 1355013 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 10:51:16.471743 1355013 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1028 10:51:16.471911 1355013 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1028 10:51:16.538905 1355013 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-10-28 10:51:16.529398959 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1028 10:51:16.539020 1355013 docker.go:318] overlay module found
	I1028 10:51:16.541600 1355013 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I1028 10:51:16.543940 1355013 start.go:297] selected driver: docker
	I1028 10:51:16.543961 1355013 start.go:901] validating driver "docker" against &{Name:functional-355847 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-355847 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 10:51:16.544066 1355013 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 10:51:16.547132 1355013 out.go:201] 
	W1028 10:51:16.549595 1355013 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1028 10:51:16.552070 1355013 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.20s)
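
The French output is the same under-memory dry run rendered in another locale. A hedged sketch of reproducing it; the assumption (not visible in this log) is that minikube picks its translation up from the standard locale environment variables:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Assumption: LC_ALL drives translation selection; the exact variable
	// the test sets is not shown in this log.
	cmd := exec.Command("out/minikube-linux-arm64", "start", "-p", "functional-355847",
		"--dry-run", "--memory", "250MB", "--driver=docker", "--container-runtime=containerd")
	cmd.Env = append(os.Environ(), "LC_ALL=fr_FR.UTF-8")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s\nexit: %v\n", out, err) // expect the French RSRC_INSUFFICIENT_REQ_MEMORY message
}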

TestFunctional/parallel/StatusCmd (1.04s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-arm64 -p functional-355847 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-arm64 -p functional-355847 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-arm64 -p functional-355847 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.04s)

TestFunctional/parallel/ServiceCmdConnect (8.62s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-355847 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-355847 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-m49zr" [d7ee1c51-8239-4991-88a7-9d5836ea3f26] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-65d86f57f4-m49zr" [d7ee1c51-8239-4991-88a7-9d5836ea3f26] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.003547867s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-arm64 -p functional-355847 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:31926
functional_test.go:1675: http://192.168.49.2:31926: success! body:

Hostname: hello-node-connect-65d86f57f4-m49zr

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31926
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.62s)
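
The flow above is: create an echoserver deployment, expose it as a NodePort, resolve the URL with `minikube service --url`, then GET it; the body is echoserver reflecting the request back. A sketch of the final polling GET, using the URL from this run:

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

// pollEndpoint GETs url until it answers, sleeping briefly between
// attempts, and returns the response body.
func pollEndpoint(url string, attempts int) (string, error) {
	var lastErr error
	for i := 0; i < attempts; i++ {
		resp, err := http.Get(url)
		if err == nil {
			body, readErr := io.ReadAll(resp.Body)
			resp.Body.Close()
			if readErr == nil {
				return string(body), nil
			}
			lastErr = readErr
		} else {
			lastErr = err
		}
		time.Sleep(2 * time.Second)
	}
	return "", lastErr
}

func main() {
	// NodePort URL printed by `minikube service hello-node-connect --url` in this run.
	body, err := pollEndpoint("http://192.168.49.2:31926", 10)
	if err != nil {
		fmt.Println("endpoint never became ready:", err)
		return
	}
	fmt.Println(body)
}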

TestFunctional/parallel/AddonsCmd (0.19s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-arm64 -p functional-355847 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-arm64 -p functional-355847 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.19s)

TestFunctional/parallel/PersistentVolumeClaim (23.99s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [72a3550b-de6a-4476-8274-3f39a5460993] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004363601s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-355847 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-355847 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-355847 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-355847 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [8dd786d7-42a5-47d9-8c1f-37f03fd55758] Pending
helpers_test.go:344: "sp-pod" [8dd786d7-42a5-47d9-8c1f-37f03fd55758] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [8dd786d7-42a5-47d9-8c1f-37f03fd55758] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.004088586s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-355847 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-355847 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-355847 delete -f testdata/storage-provisioner/pod.yaml: (1.034120478s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-355847 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [83d15287-523d-4044-8dd2-6a8aa385a84f] Pending
helpers_test.go:344: "sp-pod" [83d15287-523d-4044-8dd2-6a8aa385a84f] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [83d15287-523d-4044-8dd2-6a8aa385a84f] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.004613392s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-355847 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (23.99s)
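
The middle of this test is the actual persistence check: write a file through the first sp-pod, delete the pod, recreate it over the same claim, and confirm the file survived. A compressed sketch of that flow (error handling and the wait-for-Running steps the real test performs are omitted):

package main

import (
	"fmt"
	"os/exec"
)

// kubectl runs a kubectl subcommand against the context from this run
// and echoes its combined output.
func kubectl(args ...string) {
	full := append([]string{"--context", "functional-355847"}, args...)
	out, err := exec.Command("kubectl", full...).CombinedOutput()
	fmt.Printf("kubectl %v -> %v\n%s", args, err, out)
}

func main() {
	kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
	kubectl("delete", "-f", "testdata/storage-provisioner/pod.yaml")
	kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	// (the real test waits here until the new sp-pod is Running)
	kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount") // "foo" proves the volume outlived the pod
}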

TestFunctional/parallel/SSHCmd (0.7s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-arm64 -p functional-355847 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-arm64 -p functional-355847 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.70s)

TestFunctional/parallel/CpCmd (2.35s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-355847 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-355847 ssh -n functional-355847 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-355847 cp functional-355847:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd118726002/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-355847 ssh -n functional-355847 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-355847 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-355847 ssh -n functional-355847 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.35s)

TestFunctional/parallel/FileSync (0.33s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/1319098/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-arm64 -p functional-355847 ssh "sudo cat /etc/test/nested/copy/1319098/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.33s)

TestFunctional/parallel/CertSync (2.04s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/1319098.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-355847 ssh "sudo cat /etc/ssl/certs/1319098.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/1319098.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-355847 ssh "sudo cat /usr/share/ca-certificates/1319098.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-355847 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/13190982.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-355847 ssh "sudo cat /etc/ssl/certs/13190982.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/13190982.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-355847 ssh "sudo cat /usr/share/ca-certificates/13190982.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-355847 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.04s)
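
CertSync expects each injected certificate in three places inside the node: a copy under /etc/ssl/certs, a copy under /usr/share/ca-certificates, and an OpenSSL hash-named link (51391683.0 above). A sketch checking the first certificate's three paths from this run:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Paths taken verbatim from the log above.
	paths := []string{
		"/etc/ssl/certs/1319098.pem",
		"/usr/share/ca-certificates/1319098.pem",
		"/etc/ssl/certs/51391683.0", // OpenSSL subject-hash link to the same cert
	}
	for _, p := range paths {
		cmd := exec.Command("out/minikube-linux-arm64", "-p", "functional-355847",
			"ssh", "sudo cat "+p)
		if err := cmd.Run(); err != nil {
			fmt.Println(p, "missing:", err)
			continue
		}
		fmt.Println(p, "present")
	}
}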

TestFunctional/parallel/NodeLabels (0.1s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-355847 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.10s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.57s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-355847 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-355847 ssh "sudo systemctl is-active docker": exit status 1 (288.36768ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-355847 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-355847 ssh "sudo systemctl is-active crio": exit status 1 (278.001268ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.57s)
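
The `ssh: Process exited with status 3` lines are `systemctl is-active` reporting an inactive unit (it prints "inactive" and exits 3), which `minikube ssh` surfaces as a non-zero exit of its own; with containerd as the active runtime, docker and crio must both be inactive. A sketch of the check:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// runtimeInactive reports whether a runtime's systemd unit is inactive
// inside the node. Any non-zero exit (3 for inactive units) counts.
func runtimeInactive(profile, unit string) bool {
	cmd := exec.Command("out/minikube-linux-arm64", "-p", profile,
		"ssh", "sudo systemctl is-active "+unit)
	err := cmd.Run()
	var exitErr *exec.ExitError
	return errors.As(err, &exitErr)
}

func main() {
	for _, unit := range []string{"docker", "crio"} {
		fmt.Println(unit, "inactive:", runtimeInactive("functional-355847", unit))
	}
}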

TestFunctional/parallel/License (0.25s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.25s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.61s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-355847 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-355847 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-355847 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-355847 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 1352651: os: process already finished
helpers_test.go:502: unable to terminate pid 1352439: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.61s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-355847 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.43s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-355847 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [8d301b8a-ec28-4efd-b672-5e93844e0ede] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [8d301b8a-ec28-4efd-b672-5e93844e0ede] Running
E1028 10:50:49.941946 1319098 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/addons-487046/client.crt: no such file or directory" logger="UnhandledError"
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.003674248s
I1028 10:50:55.792976 1319098 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.43s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-355847 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.96.95.36 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
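
While `minikube tunnel` runs, the LoadBalancer ingress IP that kubectl reports (10.96.95.36 in this run) is routable straight from the host. A sketch of the direct-access check:

package main

import (
	"fmt"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	// Same jsonpath query the WaitService/IngressIP step above runs.
	out, err := exec.Command("kubectl", "--context", "functional-355847",
		"get", "svc", "nginx-svc", "-o",
		"jsonpath={.status.loadBalancer.ingress[0].ip}").Output()
	if err != nil {
		fmt.Println("could not read ingress IP:", err)
		return
	}
	ip := strings.TrimSpace(string(out))
	resp, err := http.Get("http://" + ip) // only reachable while the tunnel runs
	if err != nil {
		fmt.Println("tunnel not working:", err)
		return
	}
	resp.Body.Close()
	fmt.Println("tunnel at http://"+ip, "is working, status:", resp.Status)
}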

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-355847 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ServiceCmd/DeployApp (8.22s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-355847 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-355847 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-fhctv" [60e114fd-4fc3-4ffa-8013-f585f2d5a5cc] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64b4f8f9ff-fhctv" [60e114fd-4fc3-4ffa-8013-f585f2d5a5cc] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.003343074s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (8.22s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1315: Took "357.212012ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1329: Took "63.841921ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1366: Took "360.517638ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1379: Took "57.575563ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)
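
`profile list -o json` (and the `--light` variant, which is visibly faster in the timings above) emits machine-readable output. A sketch that consumes it without assuming any schema beyond a top-level JSON object:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "profile", "list", "-o", "json").Output()
	if err != nil {
		fmt.Println("profile list failed:", err)
		return
	}
	var doc map[string]any
	if err := json.Unmarshal(out, &doc); err != nil {
		fmt.Println("not valid JSON:", err)
		return
	}
	for k := range doc {
		fmt.Println("top-level key:", k) // schema deliberately not assumed here
	}
}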

TestFunctional/parallel/MountCmd/any-port (7.95s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-355847 /tmp/TestFunctionalparallelMountCmdany-port3703211792/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1730112672159803934" to /tmp/TestFunctionalparallelMountCmdany-port3703211792/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1730112672159803934" to /tmp/TestFunctionalparallelMountCmdany-port3703211792/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1730112672159803934" to /tmp/TestFunctionalparallelMountCmdany-port3703211792/001/test-1730112672159803934
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-355847 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-355847 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (342.424961ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1028 10:51:12.503227 1319098 retry.go:31] will retry after 314.388608ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-355847 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-355847 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct 28 10:51 created-by-test
-rw-r--r-- 1 docker docker 24 Oct 28 10:51 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct 28 10:51 test-1730112672159803934
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-355847 ssh cat /mount-9p/test-1730112672159803934
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-355847 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [b2102257-b28f-4d8c-906c-7e5cda872eb0] Pending
helpers_test.go:344: "busybox-mount" [b2102257-b28f-4d8c-906c-7e5cda872eb0] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [b2102257-b28f-4d8c-906c-7e5cda872eb0] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [b2102257-b28f-4d8c-906c-7e5cda872eb0] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003730227s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-355847 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-355847 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-355847 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-355847 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-355847 /tmp/TestFunctionalparallelMountCmdany-port3703211792/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.95s)
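
The first findmnt probe fails and `retry.go:31] will retry after 314.388608ms` shows the test's backoff: the 9p mount takes a moment to appear after the mount daemon starts, so the probe is retried instead of failing outright. A sketch of that wait loop:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForMount re-runs the same findmnt probe the test uses until the
// 9p mount shows up or attempts are exhausted.
func waitForMount(profile, mountpoint string, attempts int) bool {
	for i := 0; i < attempts; i++ {
		cmd := exec.Command("out/minikube-linux-arm64", "-p", profile,
			"ssh", "findmnt -T "+mountpoint+" | grep 9p")
		if cmd.Run() == nil {
			return true
		}
		time.Sleep(300 * time.Millisecond) // comparable to the ~314ms backoff logged
	}
	return false
}

func main() {
	fmt.Println("mounted:", waitForMount("functional-355847", "/mount-9p", 10))
}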

TestFunctional/parallel/ServiceCmd/List (0.55s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-arm64 -p functional-355847 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.55s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.58s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-arm64 -p functional-355847 service list -o json
functional_test.go:1494: Took "580.855492ms" to run "out/minikube-linux-arm64 -p functional-355847 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.58s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.42s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-arm64 -p functional-355847 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:30609
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.42s)

TestFunctional/parallel/ServiceCmd/Format (0.35s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-arm64 -p functional-355847 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.35s)

TestFunctional/parallel/ServiceCmd/URL (0.37s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-arm64 -p functional-355847 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:30609
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.37s)

TestFunctional/parallel/MountCmd/specific-port (1.92s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-355847 /tmp/TestFunctionalparallelMountCmdspecific-port221373758/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-355847 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-355847 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (456.380775ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1028 10:51:20.562938 1319098 retry.go:31] will retry after 334.996604ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-355847 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-355847 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-355847 /tmp/TestFunctionalparallelMountCmdspecific-port221373758/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-355847 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-355847 ssh "sudo umount -f /mount-9p": exit status 1 (315.995098ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-355847 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-355847 /tmp/TestFunctionalparallelMountCmdspecific-port221373758/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.92s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.68s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-355847 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2821999590/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-355847 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2821999590/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-355847 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2821999590/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-355847 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-355847 ssh "findmnt -T" /mount1: exit status 1 (964.697208ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
I1028 10:51:22.994474 1319098 retry.go:31] will retry after 596.168485ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-355847 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-355847 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-355847 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-355847 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-355847 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2821999590/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-355847 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2821999590/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-355847 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2821999590/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.68s)
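Note: cleanup here relies on the --kill flag, which terminates every mount process belonging to the profile in one shot. A minimal sketch, assuming the same profile (the host directory /tmp/src is hypothetical):

	# start several overlapping mounts in the background
	out/minikube-linux-arm64 mount -p functional-355847 /tmp/src:/mount1 &
	out/minikube-linux-arm64 mount -p functional-355847 /tmp/src:/mount2 &
	# kill all mount processes for the profile at once
	out/minikube-linux-arm64 mount -p functional-355847 --kill=true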

TestFunctional/parallel/Version/short (0.07s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-arm64 -p functional-355847 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

TestFunctional/parallel/Version/components (1.27s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-arm64 -p functional-355847 version -o=json --components
functional_test.go:2270: (dbg) Done: out/minikube-linux-arm64 -p functional-355847 version -o=json --components: (1.269371392s)
--- PASS: TestFunctional/parallel/Version/components (1.27s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-355847 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-355847 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.2
registry.k8s.io/kube-proxy:v1.31.2
registry.k8s.io/kube-controller-manager:v1.31.2
registry.k8s.io/kube-apiserver:v1.31.2
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-355847
docker.io/kindest/kindnetd:v20241007-36f62932
docker.io/kicbase/echo-server:functional-355847
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-355847 image ls --format short --alsologtostderr:
I1028 10:51:32.504216 1357888 out.go:345] Setting OutFile to fd 1 ...
I1028 10:51:32.504396 1357888 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1028 10:51:32.504409 1357888 out.go:358] Setting ErrFile to fd 2...
I1028 10:51:32.504416 1357888 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1028 10:51:32.504707 1357888 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19876-1313708/.minikube/bin
I1028 10:51:32.505478 1357888 config.go:182] Loaded profile config "functional-355847": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
I1028 10:51:32.505630 1357888 config.go:182] Loaded profile config "functional-355847": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
I1028 10:51:32.506196 1357888 cli_runner.go:164] Run: docker container inspect functional-355847 --format={{.State.Status}}
I1028 10:51:32.525761 1357888 ssh_runner.go:195] Run: systemctl --version
I1028 10:51:32.525824 1357888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-355847
I1028 10:51:32.546626 1357888 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40095 SSHKeyPath:/home/jenkins/minikube-integration/19876-1313708/.minikube/machines/functional-355847/id_rsa Username:docker}
I1028 10:51:32.640257 1357888 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.33s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-355847 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-355847 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/coredns/coredns             | v1.11.3            | sha256:2f6c96 | 16.9MB |
| registry.k8s.io/echoserver-arm              | 1.8                | sha256:72565b | 45.3MB |
| docker.io/library/nginx                     | latest             | sha256:4b1965 | 69.6MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:ba04bb | 8.03MB |
| registry.k8s.io/kube-apiserver              | v1.31.2            | sha256:f9c264 | 25.6MB |
| registry.k8s.io/kube-scheduler              | v1.31.2            | sha256:d6b061 | 18.4MB |
| docker.io/library/minikube-local-cache-test | functional-355847  | sha256:0de9fb | 989B   |
| registry.k8s.io/pause                       | latest             | sha256:8cb209 | 71.3kB |
| docker.io/kicbase/echo-server               | functional-355847  | sha256:ce2d2c | 2.17MB |
| docker.io/library/nginx                     | alpine             | sha256:577a23 | 21.5MB |
| registry.k8s.io/kube-controller-manager     | v1.31.2            | sha256:9404ae | 23.9MB |
| docker.io/kindest/kindnetd                  | v20241007-36f62932 | sha256:0bcd66 | 35.3MB |
| registry.k8s.io/pause                       | 3.3                | sha256:3d1873 | 249kB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:1611cd | 1.94MB |
| registry.k8s.io/etcd                        | 3.5.15-0           | sha256:27e383 | 66.5MB |
| registry.k8s.io/kube-proxy                  | v1.31.2            | sha256:021d24 | 26.8MB |
| registry.k8s.io/pause                       | 3.10               | sha256:afb617 | 268kB  |
| registry.k8s.io/pause                       | 3.1                | sha256:8057e0 | 262kB  |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-355847 image ls --format table --alsologtostderr:
I1028 10:51:32.794436 1357955 out.go:345] Setting OutFile to fd 1 ...
I1028 10:51:32.794586 1357955 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1028 10:51:32.794597 1357955 out.go:358] Setting ErrFile to fd 2...
I1028 10:51:32.794603 1357955 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1028 10:51:32.794907 1357955 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19876-1313708/.minikube/bin
I1028 10:51:32.795644 1357955 config.go:182] Loaded profile config "functional-355847": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
I1028 10:51:32.795793 1357955 config.go:182] Loaded profile config "functional-355847": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
I1028 10:51:32.796324 1357955 cli_runner.go:164] Run: docker container inspect functional-355847 --format={{.State.Status}}
I1028 10:51:32.827383 1357955 ssh_runner.go:195] Run: systemctl --version
I1028 10:51:32.827444 1357955 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-355847
I1028 10:51:32.857706 1357955 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40095 SSHKeyPath:/home/jenkins/minikube-integration/19876-1313708/.minikube/machines/functional-355847/id_rsa Username:docker}
I1028 10:51:32.952734 1357955 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.33s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-355847 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-355847 image ls --format json --alsologtostderr:
[{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":["registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"267933"},{"id":"sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"18306114"},{"id":"sha256:0bcd66b03df5f1498fba5b90226939f5993cfba4c8379438bd8e89f3b4a70baa","repoDigests":["docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387"],"repoTags":["docker.io/kindest/kindnetd:v20241007-36f62932"],"size":"35320503"},{"id":"sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"74084559"},{"id":"sha256:577a23b5858b94a1a92e4263bd5d6da99fbd997fb9839bc0f357c9d4b858f431","repoDigests":["docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250"],"repoTags":["docker.io/library/nginx:alpine"],"size":"21533923"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:d6b061e73ae454743cbfe0e3479aa23e4ed65c61d38b4408a1e7f3d3859dda8a","repoDigests":["registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.2"],"size":"18429679"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"},{"id":"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-355847"],"size":"2173567"},{"id":"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":["registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"66535646"},{"id":"sha256:9404aea098d9e80cb648d86c07d56130a1fe875ed7c2526251c2ae68a9bf07ba","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.2"],"size":"23872272"},{"id":"sha256:021d2420133054f8835987db659750ff639ab6863776460264dd8025c06644ba","repoDigests":["registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.2"],"size":"26768683"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:4b196525bd3cc6aa7a72ba63c6c2ae6d957b57edd603a7070c5e31f8e63c51f9","repoDigests":["docker.io/library/nginx@sha256:28402db69fec7c17e179ea87882667f1e054391138f77ffaf0c3eb388efc3ffb"],"repoTags":["docker.io/library/nginx:latest"],"size":"69600252"},{"id":"sha256:f9c26480f1e722a7d05d7f1bb339180b19f941b23bcc928208e362df04a61270","repoDigests":["registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.2"],"size":"25612805"},{"id":"sha256:0de9fb1f1bb1a4548e78cdd1b8170a7ff3ad26619a9979641d7009ce4806e75a","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-355847"],"size":"989"},{"id":"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"},{"id":"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"16948420"},{"id":"sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"45324675"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-355847 image ls --format json --alsologtostderr:
I1028 10:51:32.790780 1357954 out.go:345] Setting OutFile to fd 1 ...
I1028 10:51:32.790981 1357954 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1028 10:51:32.791010 1357954 out.go:358] Setting ErrFile to fd 2...
I1028 10:51:32.791032 1357954 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1028 10:51:32.791385 1357954 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19876-1313708/.minikube/bin
I1028 10:51:32.792132 1357954 config.go:182] Loaded profile config "functional-355847": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
I1028 10:51:32.792304 1357954 config.go:182] Loaded profile config "functional-355847": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
I1028 10:51:32.792852 1357954 cli_runner.go:164] Run: docker container inspect functional-355847 --format={{.State.Status}}
I1028 10:51:32.812665 1357954 ssh_runner.go:195] Run: systemctl --version
I1028 10:51:32.812718 1357954 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-355847
I1028 10:51:32.842577 1357954 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40095 SSHKeyPath:/home/jenkins/minikube-integration/19876-1313708/.minikube/machines/functional-355847/id_rsa Username:docker}
I1028 10:51:32.932083 1357954 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)
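Note: the JSON output above is a flat array of objects with id, repoDigests, repoTags and size fields (size is a string), which makes it easy to post-process. A minimal sketch, assuming jq is available on the host (jq is not part of the test):

	# print "tag size" for every tagged image in the cluster's containerd store
	out/minikube-linux-arm64 -p functional-355847 image ls --format json \
		| jq -r '.[] | select(.repoTags | length > 0) | "\(.repoTags[0]) \(.size)"'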

TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-355847 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-355847 image ls --format yaml --alsologtostderr:
- id: sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "18306114"
- id: sha256:f9c26480f1e722a7d05d7f1bb339180b19f941b23bcc928208e362df04a61270
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.2
size: "25612805"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests:
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "267933"
- id: sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-355847
size: "2173567"
- id: sha256:0de9fb1f1bb1a4548e78cdd1b8170a7ff3ad26619a9979641d7009ce4806e75a
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-355847
size: "989"
- id: sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "74084559"
- id: sha256:021d2420133054f8835987db659750ff639ab6863776460264dd8025c06644ba
repoDigests:
- registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe
repoTags:
- registry.k8s.io/kube-proxy:v1.31.2
size: "26768683"
- id: sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "45324675"
- id: sha256:d6b061e73ae454743cbfe0e3479aa23e4ed65c61d38b4408a1e7f3d3859dda8a
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.2
size: "18429679"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:0bcd66b03df5f1498fba5b90226939f5993cfba4c8379438bd8e89f3b4a70baa
repoDigests:
- docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387
repoTags:
- docker.io/kindest/kindnetd:v20241007-36f62932
size: "35320503"
- id: sha256:577a23b5858b94a1a92e4263bd5d6da99fbd997fb9839bc0f357c9d4b858f431
repoDigests:
- docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250
repoTags:
- docker.io/library/nginx:alpine
size: "21533923"
- id: sha256:4b196525bd3cc6aa7a72ba63c6c2ae6d957b57edd603a7070c5e31f8e63c51f9
repoDigests:
- docker.io/library/nginx@sha256:28402db69fec7c17e179ea87882667f1e054391138f77ffaf0c3eb388efc3ffb
repoTags:
- docker.io/library/nginx:latest
size: "69600252"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "16948420"
- id: sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests:
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "66535646"
- id: sha256:9404aea098d9e80cb648d86c07d56130a1fe875ed7c2526251c2ae68a9bf07ba
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.2
size: "23872272"

functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-355847 image ls --format yaml --alsologtostderr:
I1028 10:51:32.507495 1357889 out.go:345] Setting OutFile to fd 1 ...
I1028 10:51:32.507727 1357889 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1028 10:51:32.507760 1357889 out.go:358] Setting ErrFile to fd 2...
I1028 10:51:32.507780 1357889 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1028 10:51:32.508088 1357889 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19876-1313708/.minikube/bin
I1028 10:51:32.508757 1357889 config.go:182] Loaded profile config "functional-355847": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
I1028 10:51:32.508914 1357889 config.go:182] Loaded profile config "functional-355847": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
I1028 10:51:32.509446 1357889 cli_runner.go:164] Run: docker container inspect functional-355847 --format={{.State.Status}}
I1028 10:51:32.527262 1357889 ssh_runner.go:195] Run: systemctl --version
I1028 10:51:32.527309 1357889 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-355847
I1028 10:51:32.551865 1357889 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40095 SSHKeyPath:/home/jenkins/minikube-integration/19876-1313708/.minikube/machines/functional-355847/id_rsa Username:docker}
I1028 10:51:32.645173 1357889 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.51s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p functional-355847 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-355847 ssh pgrep buildkitd: exit status 1 (266.02716ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-arm64 -p functional-355847 image build -t localhost/my-image:functional-355847 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-arm64 -p functional-355847 image build -t localhost/my-image:functional-355847 testdata/build --alsologtostderr: (3.008423038s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-arm64 -p functional-355847 image build -t localhost/my-image:functional-355847 testdata/build --alsologtostderr:
I1028 10:51:33.306745 1358079 out.go:345] Setting OutFile to fd 1 ...
I1028 10:51:33.307763 1358079 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1028 10:51:33.307805 1358079 out.go:358] Setting ErrFile to fd 2...
I1028 10:51:33.307828 1358079 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1028 10:51:33.308203 1358079 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19876-1313708/.minikube/bin
I1028 10:51:33.309949 1358079 config.go:182] Loaded profile config "functional-355847": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
I1028 10:51:33.311996 1358079 config.go:182] Loaded profile config "functional-355847": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
I1028 10:51:33.312518 1358079 cli_runner.go:164] Run: docker container inspect functional-355847 --format={{.State.Status}}
I1028 10:51:33.328761 1358079 ssh_runner.go:195] Run: systemctl --version
I1028 10:51:33.328824 1358079 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-355847
I1028 10:51:33.344842 1358079 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40095 SSHKeyPath:/home/jenkins/minikube-integration/19876-1313708/.minikube/machines/functional-355847/id_rsa Username:docker}
I1028 10:51:33.432047 1358079 build_images.go:161] Building image from path: /tmp/build.3581464575.tar
I1028 10:51:33.432153 1358079 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1028 10:51:33.440755 1358079 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3581464575.tar
I1028 10:51:33.444107 1358079 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3581464575.tar: stat -c "%s %y" /var/lib/minikube/build/build.3581464575.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3581464575.tar': No such file or directory
I1028 10:51:33.444137 1358079 ssh_runner.go:362] scp /tmp/build.3581464575.tar --> /var/lib/minikube/build/build.3581464575.tar (3072 bytes)
I1028 10:51:33.468312 1358079 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3581464575
I1028 10:51:33.477240 1358079 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3581464575 -xf /var/lib/minikube/build/build.3581464575.tar
I1028 10:51:33.487127 1358079 containerd.go:394] Building image: /var/lib/minikube/build/build.3581464575
I1028 10:51:33.487201 1358079 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.3581464575 --local dockerfile=/var/lib/minikube/build/build.3581464575 --output type=image,name=localhost/my-image:functional-355847
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.2s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 DONE 0.1s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.5s

#6 [2/3] RUN true
#6 DONE 0.5s

#7 [3/3] ADD content.txt /
#7 DONE 0.1s

#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:fc9b8b523a57742c2ccfa3f6832de8f7b8816982183f21ce27013ebcf42d3bb5
#8 exporting manifest sha256:fc9b8b523a57742c2ccfa3f6832de8f7b8816982183f21ce27013ebcf42d3bb5 0.0s done
#8 exporting config sha256:4594f92e7f7f1ec60dab5fd98aaa11189eed807b138d85b7c53f3c6fea020283 0.0s done
#8 naming to localhost/my-image:functional-355847 done
#8 DONE 0.2s
I1028 10:51:36.238255 1358079 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.3581464575 --local dockerfile=/var/lib/minikube/build/build.3581464575 --output type=image,name=localhost/my-image:functional-355847: (2.751024402s)
I1028 10:51:36.238325 1358079 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3581464575
I1028 10:51:36.247510 1358079 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3581464575.tar
I1028 10:51:36.256281 1358079 build_images.go:217] Built localhost/my-image:functional-355847 from /tmp/build.3581464575.tar
I1028 10:51:36.256317 1358079 build_images.go:133] succeeded building to: functional-355847
I1028 10:51:36.256323 1358079 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-355847 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.51s)
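Note: the three build steps logged above ([1/3] FROM gcr.io/k8s-minikube/busybox, [2/3] RUN true, [3/3] ADD content.txt /) imply a Dockerfile along the following lines. A sketch of reproducing the build by hand; the directory, file contents and tag are hypothetical stand-ins for the real context in testdata/build:

	mkdir -p /tmp/build-demo && cd /tmp/build-demo
	# <<- strips the leading tabs from the heredoc body
	cat > Dockerfile <<-'EOF'
		FROM gcr.io/k8s-minikube/busybox:latest
		RUN true
		ADD content.txt /
	EOF
	echo demo > content.txt
	# builds inside the cluster via buildkit (buildctl), as the log above shows
	out/minikube-linux-arm64 -p functional-355847 image build -t localhost/my-image:demo .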

TestFunctional/parallel/ImageCommands/Setup (0.63s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-355847
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.63s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.39s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p functional-355847 image load --daemon kicbase/echo-server:functional-355847 --alsologtostderr
2024/10/28 10:51:26 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:355: (dbg) Done: out/minikube-linux-arm64 -p functional-355847 image load --daemon kicbase/echo-server:functional-355847 --alsologtostderr: (1.080548539s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-355847 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.39s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.2s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-355847 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.20s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.22s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-355847 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.22s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.17s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-355847 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.17s)
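Note: update-context rewrites the profile's entry in the kubeconfig when the apiserver address or port changes; the three subtests above only vary the surrounding cluster state. A minimal sketch (the kubectl check is an assumption, added only to confirm the result):

	out/minikube-linux-arm64 -p functional-355847 update-context
	kubectl config view -o jsonpath='{.clusters[?(@.name=="functional-355847")].cluster.server}'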

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.38s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p functional-355847 image load --daemon kicbase/echo-server:functional-355847 --alsologtostderr
functional_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p functional-355847 image load --daemon kicbase/echo-server:functional-355847 --alsologtostderr: (1.077009382s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-355847 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.38s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.42s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-355847
functional_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p functional-355847 image load --daemon kicbase/echo-server:functional-355847 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-355847 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.42s)
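Note: the --daemon load path copies an image from the host's Docker daemon into the cluster's containerd store. The commands the test runs, collected in order for reuse:

	docker pull kicbase/echo-server:latest
	docker tag kicbase/echo-server:latest kicbase/echo-server:functional-355847
	out/minikube-linux-arm64 -p functional-355847 image load --daemon kicbase/echo-server:functional-355847
	out/minikube-linux-arm64 -p functional-355847 image ls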

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.45s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-355847 image save kicbase/echo-server:functional-355847 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.45s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.46s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-arm64 -p functional-355847 image rm kicbase/echo-server:functional-355847 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-355847 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.46s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.7s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-arm64 -p functional-355847 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-355847 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.70s)
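Note: ImageSaveToFile, ImageRemove and ImageLoadFromFile together form a tar round trip. A minimal sketch, using a hypothetical /tmp path in place of the workspace path from the log:

	out/minikube-linux-arm64 -p functional-355847 image save kicbase/echo-server:functional-355847 /tmp/echo-server-save.tar
	out/minikube-linux-arm64 -p functional-355847 image rm kicbase/echo-server:functional-355847
	out/minikube-linux-arm64 -p functional-355847 image load /tmp/echo-server-save.tar
	out/minikube-linux-arm64 -p functional-355847 image ls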

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.39s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-355847
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-355847 image save --daemon kicbase/echo-server:functional-355847 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-355847
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.39s)

TestFunctional/delete_echo-server_images (0.04s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-355847
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-355847
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-355847
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (116.68s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-372281 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E1028 10:52:11.863850 1319098 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/addons-487046/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-372281 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m55.861506912s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-372281 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (116.68s)
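Note: --ha requests a multi-control-plane cluster (three control-plane nodes here, with a worker added later). The invocation from the log, reformatted for reuse:

	out/minikube-linux-arm64 start -p ha-372281 --wait=true --memory=2200 --ha \
		-v=7 --alsologtostderr --driver=docker --container-runtime=containerd
	out/minikube-linux-arm64 -p ha-372281 status -v=7 --alsologtostderr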

TestMultiControlPlane/serial/DeployApp (34.55s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-372281 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-372281 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-372281 -- rollout status deployment/busybox: (31.562248787s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-372281 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-372281 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-372281 -- exec busybox-7dff88458-54vcd -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-372281 -- exec busybox-7dff88458-5sbhx -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-372281 -- exec busybox-7dff88458-5wvk4 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-372281 -- exec busybox-7dff88458-54vcd -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-372281 -- exec busybox-7dff88458-5sbhx -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-372281 -- exec busybox-7dff88458-5wvk4 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-372281 -- exec busybox-7dff88458-54vcd -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-372281 -- exec busybox-7dff88458-5sbhx -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-372281 -- exec busybox-7dff88458-5wvk4 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (34.55s)

TestMultiControlPlane/serial/PingHostFromPods (1.7s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-372281 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-372281 -- exec busybox-7dff88458-54vcd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-372281 -- exec busybox-7dff88458-54vcd -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-372281 -- exec busybox-7dff88458-5sbhx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-372281 -- exec busybox-7dff88458-5sbhx -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-372281 -- exec busybox-7dff88458-5wvk4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-372281 -- exec busybox-7dff88458-5wvk4 -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.70s)
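Note: the test resolves host.minikube.internal from inside each busybox pod and pings the resulting gateway address (192.168.49.1 on minikube's default docker network). Reduced to one pod for clarity:

	out/minikube-linux-arm64 kubectl -p ha-372281 -- exec busybox-7dff88458-54vcd -- nslookup host.minikube.internal
	out/minikube-linux-arm64 kubectl -p ha-372281 -- exec busybox-7dff88458-54vcd -- sh -c "ping -c 1 192.168.49.1"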

TestMultiControlPlane/serial/AddWorkerNode (21.81s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-372281 -v=7 --alsologtostderr
E1028 10:54:28.000971 1319098 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/addons-487046/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-372281 -v=7 --alsologtostderr: (20.86082618s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-372281 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (21.81s)
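Note: node add without --control-plane attaches a worker node; the status check afterwards should list it with type: Worker. The commands from the log:

	out/minikube-linux-arm64 node add -p ha-372281 -v=7 --alsologtostderr
	out/minikube-linux-arm64 -p ha-372281 status -v=7 --alsologtostderr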

TestMultiControlPlane/serial/NodeLabels (0.11s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-372281 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.11s)
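Note: the jsonpath above dumps every node's label map on one line; a more readable equivalent (kubectl's --show-labels flag, not used by the test) would be:

	kubectl --context ha-372281 get nodes --show-labels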

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.95s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.95s)

TestMultiControlPlane/serial/CopyFile (18.22s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-372281 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-372281 cp testdata/cp-test.txt ha-372281:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-372281 ssh -n ha-372281 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-372281 cp ha-372281:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile929831559/001/cp-test_ha-372281.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-372281 ssh -n ha-372281 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-372281 cp ha-372281:/home/docker/cp-test.txt ha-372281-m02:/home/docker/cp-test_ha-372281_ha-372281-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-372281 ssh -n ha-372281 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-372281 ssh -n ha-372281-m02 "sudo cat /home/docker/cp-test_ha-372281_ha-372281-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-372281 cp ha-372281:/home/docker/cp-test.txt ha-372281-m03:/home/docker/cp-test_ha-372281_ha-372281-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-372281 ssh -n ha-372281 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-372281 ssh -n ha-372281-m03 "sudo cat /home/docker/cp-test_ha-372281_ha-372281-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-372281 cp ha-372281:/home/docker/cp-test.txt ha-372281-m04:/home/docker/cp-test_ha-372281_ha-372281-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-372281 ssh -n ha-372281 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-372281 ssh -n ha-372281-m04 "sudo cat /home/docker/cp-test_ha-372281_ha-372281-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-372281 cp testdata/cp-test.txt ha-372281-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-372281 ssh -n ha-372281-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-372281 cp ha-372281-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile929831559/001/cp-test_ha-372281-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-372281 ssh -n ha-372281-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-372281 cp ha-372281-m02:/home/docker/cp-test.txt ha-372281:/home/docker/cp-test_ha-372281-m02_ha-372281.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-372281 ssh -n ha-372281-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-372281 ssh -n ha-372281 "sudo cat /home/docker/cp-test_ha-372281-m02_ha-372281.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-372281 cp ha-372281-m02:/home/docker/cp-test.txt ha-372281-m03:/home/docker/cp-test_ha-372281-m02_ha-372281-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-372281 ssh -n ha-372281-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-372281 ssh -n ha-372281-m03 "sudo cat /home/docker/cp-test_ha-372281-m02_ha-372281-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-372281 cp ha-372281-m02:/home/docker/cp-test.txt ha-372281-m04:/home/docker/cp-test_ha-372281-m02_ha-372281-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-372281 ssh -n ha-372281-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-372281 ssh -n ha-372281-m04 "sudo cat /home/docker/cp-test_ha-372281-m02_ha-372281-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-372281 cp testdata/cp-test.txt ha-372281-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-372281 ssh -n ha-372281-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-372281 cp ha-372281-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile929831559/001/cp-test_ha-372281-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-372281 ssh -n ha-372281-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-372281 cp ha-372281-m03:/home/docker/cp-test.txt ha-372281:/home/docker/cp-test_ha-372281-m03_ha-372281.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-372281 ssh -n ha-372281-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-372281 ssh -n ha-372281 "sudo cat /home/docker/cp-test_ha-372281-m03_ha-372281.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-372281 cp ha-372281-m03:/home/docker/cp-test.txt ha-372281-m02:/home/docker/cp-test_ha-372281-m03_ha-372281-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-372281 ssh -n ha-372281-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-372281 ssh -n ha-372281-m02 "sudo cat /home/docker/cp-test_ha-372281-m03_ha-372281-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-372281 cp ha-372281-m03:/home/docker/cp-test.txt ha-372281-m04:/home/docker/cp-test_ha-372281-m03_ha-372281-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-372281 ssh -n ha-372281-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-372281 ssh -n ha-372281-m04 "sudo cat /home/docker/cp-test_ha-372281-m03_ha-372281-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-372281 cp testdata/cp-test.txt ha-372281-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-372281 ssh -n ha-372281-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-372281 cp ha-372281-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile929831559/001/cp-test_ha-372281-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-372281 ssh -n ha-372281-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-372281 cp ha-372281-m04:/home/docker/cp-test.txt ha-372281:/home/docker/cp-test_ha-372281-m04_ha-372281.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-372281 ssh -n ha-372281-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-372281 ssh -n ha-372281 "sudo cat /home/docker/cp-test_ha-372281-m04_ha-372281.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-372281 cp ha-372281-m04:/home/docker/cp-test.txt ha-372281-m02:/home/docker/cp-test_ha-372281-m04_ha-372281-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-372281 ssh -n ha-372281-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-372281 ssh -n ha-372281-m02 "sudo cat /home/docker/cp-test_ha-372281-m04_ha-372281-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-372281 cp ha-372281-m04:/home/docker/cp-test.txt ha-372281-m03:/home/docker/cp-test_ha-372281-m04_ha-372281-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-372281 ssh -n ha-372281-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-372281 ssh -n ha-372281-m03 "sudo cat /home/docker/cp-test_ha-372281-m04_ha-372281-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (18.22s)
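
The CopyFile subtest above round-trips a fixture through every node pair: "minikube cp" pushes the file, then "ssh -n <node> sudo cat ..." reads it back for comparison. A minimal sketch of one leg, assuming a built minikube binary on PATH and the same profile name (both illustrative outside this CI host):

package main

import (
	"fmt"
	"log"
	"os/exec"
)

// run shells out to minikube the same way the test helpers do.
func run(args ...string) string {
	out, err := exec.Command("minikube", args...).CombinedOutput()
	if err != nil {
		log.Fatalf("minikube %v: %v\n%s", args, err, out)
	}
	return string(out)
}

func main() {
	// Push a local file to node m02, then read it back over SSH.
	run("-p", "ha-372281", "cp", "testdata/cp-test.txt",
		"ha-372281-m02:/home/docker/cp-test.txt")
	fmt.Print(run("-p", "ha-372281", "ssh", "-n", "ha-372281-m02",
		"sudo cat /home/docker/cp-test.txt"))
}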

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (12.86s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-372281 node stop m02 -v=7 --alsologtostderr
E1028 10:54:55.705462 1319098 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/addons-487046/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-372281 node stop m02 -v=7 --alsologtostderr: (12.132501282s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-372281 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-372281 status -v=7 --alsologtostderr: exit status 7 (730.595031ms)

                                                
                                                
-- stdout --
	ha-372281
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-372281-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-372281-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-372281-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1028 10:55:05.407134 1374255 out.go:345] Setting OutFile to fd 1 ...
	I1028 10:55:05.407341 1374255 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 10:55:05.407367 1374255 out.go:358] Setting ErrFile to fd 2...
	I1028 10:55:05.407386 1374255 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 10:55:05.407715 1374255 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19876-1313708/.minikube/bin
	I1028 10:55:05.407962 1374255 out.go:352] Setting JSON to false
	I1028 10:55:05.408023 1374255 mustload.go:65] Loading cluster: ha-372281
	I1028 10:55:05.408068 1374255 notify.go:220] Checking for updates...
	I1028 10:55:05.408520 1374255 config.go:182] Loaded profile config "ha-372281": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
	I1028 10:55:05.408561 1374255 status.go:174] checking status of ha-372281 ...
	I1028 10:55:05.409187 1374255 cli_runner.go:164] Run: docker container inspect ha-372281 --format={{.State.Status}}
	I1028 10:55:05.430696 1374255 status.go:371] ha-372281 host status = "Running" (err=<nil>)
	I1028 10:55:05.430720 1374255 host.go:66] Checking if "ha-372281" exists ...
	I1028 10:55:05.431027 1374255 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-372281
	I1028 10:55:05.455473 1374255 host.go:66] Checking if "ha-372281" exists ...
	I1028 10:55:05.455828 1374255 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1028 10:55:05.455889 1374255 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-372281
	I1028 10:55:05.474861 1374255 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40100 SSHKeyPath:/home/jenkins/minikube-integration/19876-1313708/.minikube/machines/ha-372281/id_rsa Username:docker}
	I1028 10:55:05.565540 1374255 ssh_runner.go:195] Run: systemctl --version
	I1028 10:55:05.570158 1374255 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 10:55:05.589284 1374255 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1028 10:55:05.661069 1374255 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:48 OomKillDisable:true NGoroutines:71 SystemTime:2024-10-28 10:55:05.648409069 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1028 10:55:05.662023 1374255 kubeconfig.go:125] found "ha-372281" server: "https://192.168.49.254:8443"
	I1028 10:55:05.662061 1374255 api_server.go:166] Checking apiserver status ...
	I1028 10:55:05.662112 1374255 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 10:55:05.673935 1374255 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1505/cgroup
	I1028 10:55:05.683520 1374255 api_server.go:182] apiserver freezer: "6:freezer:/docker/bedf6d4a713221604a2f90250b76edb059c74ce7442bde690d0570117b89f561/kubepods/burstable/pod47197700ab14c5acb5b9074cad5c7889/42e03ca2ce09ddf3dbfb8976ea54e46850e42df4055942dca776eaa0ea139009"
	I1028 10:55:05.683589 1374255 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/bedf6d4a713221604a2f90250b76edb059c74ce7442bde690d0570117b89f561/kubepods/burstable/pod47197700ab14c5acb5b9074cad5c7889/42e03ca2ce09ddf3dbfb8976ea54e46850e42df4055942dca776eaa0ea139009/freezer.state
	I1028 10:55:05.692526 1374255 api_server.go:204] freezer state: "THAWED"
	I1028 10:55:05.692557 1374255 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1028 10:55:05.700732 1374255 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1028 10:55:05.700760 1374255 status.go:463] ha-372281 apiserver status = Running (err=<nil>)
	I1028 10:55:05.700771 1374255 status.go:176] ha-372281 status: &{Name:ha-372281 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1028 10:55:05.700788 1374255 status.go:174] checking status of ha-372281-m02 ...
	I1028 10:55:05.701094 1374255 cli_runner.go:164] Run: docker container inspect ha-372281-m02 --format={{.State.Status}}
	I1028 10:55:05.720452 1374255 status.go:371] ha-372281-m02 host status = "Stopped" (err=<nil>)
	I1028 10:55:05.720476 1374255 status.go:384] host is not running, skipping remaining checks
	I1028 10:55:05.720483 1374255 status.go:176] ha-372281-m02 status: &{Name:ha-372281-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1028 10:55:05.720504 1374255 status.go:174] checking status of ha-372281-m03 ...
	I1028 10:55:05.720808 1374255 cli_runner.go:164] Run: docker container inspect ha-372281-m03 --format={{.State.Status}}
	I1028 10:55:05.744371 1374255 status.go:371] ha-372281-m03 host status = "Running" (err=<nil>)
	I1028 10:55:05.744393 1374255 host.go:66] Checking if "ha-372281-m03" exists ...
	I1028 10:55:05.744678 1374255 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-372281-m03
	I1028 10:55:05.763332 1374255 host.go:66] Checking if "ha-372281-m03" exists ...
	I1028 10:55:05.763684 1374255 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1028 10:55:05.763733 1374255 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-372281-m03
	I1028 10:55:05.780196 1374255 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40110 SSHKeyPath:/home/jenkins/minikube-integration/19876-1313708/.minikube/machines/ha-372281-m03/id_rsa Username:docker}
	I1028 10:55:05.876260 1374255 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 10:55:05.888596 1374255 kubeconfig.go:125] found "ha-372281" server: "https://192.168.49.254:8443"
	I1028 10:55:05.888682 1374255 api_server.go:166] Checking apiserver status ...
	I1028 10:55:05.888752 1374255 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 10:55:05.899801 1374255 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1347/cgroup
	I1028 10:55:05.909397 1374255 api_server.go:182] apiserver freezer: "6:freezer:/docker/704832ba6237e854ddf71af05446503027aca55fb42625810c45369f1537982a/kubepods/burstable/pod6484a10514c2a35ab71bdce9b2b582b1/32e00b65a72c324623b47765a57bd8ac6808fec8a6cd5430b579bc2cfb745343"
	I1028 10:55:05.909546 1374255 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/704832ba6237e854ddf71af05446503027aca55fb42625810c45369f1537982a/kubepods/burstable/pod6484a10514c2a35ab71bdce9b2b582b1/32e00b65a72c324623b47765a57bd8ac6808fec8a6cd5430b579bc2cfb745343/freezer.state
	I1028 10:55:05.918398 1374255 api_server.go:204] freezer state: "THAWED"
	I1028 10:55:05.918426 1374255 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1028 10:55:05.926105 1374255 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1028 10:55:05.926140 1374255 status.go:463] ha-372281-m03 apiserver status = Running (err=<nil>)
	I1028 10:55:05.926166 1374255 status.go:176] ha-372281-m03 status: &{Name:ha-372281-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1028 10:55:05.926189 1374255 status.go:174] checking status of ha-372281-m04 ...
	I1028 10:55:05.926507 1374255 cli_runner.go:164] Run: docker container inspect ha-372281-m04 --format={{.State.Status}}
	I1028 10:55:05.943498 1374255 status.go:371] ha-372281-m04 host status = "Running" (err=<nil>)
	I1028 10:55:05.943527 1374255 host.go:66] Checking if "ha-372281-m04" exists ...
	I1028 10:55:05.943900 1374255 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-372281-m04
	I1028 10:55:05.961161 1374255 host.go:66] Checking if "ha-372281-m04" exists ...
	I1028 10:55:05.961568 1374255 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1028 10:55:05.961618 1374255 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-372281-m04
	I1028 10:55:05.978220 1374255 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40115 SSHKeyPath:/home/jenkins/minikube-integration/19876-1313708/.minikube/machines/ha-372281-m04/id_rsa Username:docker}
	I1028 10:55:06.069438 1374255 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 10:55:06.082344 1374255 status.go:176] ha-372281-m04 status: &{Name:ha-372281-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.86s)
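
The status check above relies on minikube's convention that "status" exits non-zero (7 here) once any node is stopped, while still printing the full per-node table. A sketch that reads both the exit code and the output, under the same profile assumption as above:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("minikube", "-p", "ha-372281", "status").Output()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		// Exit code 7 signals at least one stopped host; the table on
		// stdout is still complete and parseable.
		fmt.Printf("degraded cluster, exit code %d\n", ee.ExitCode())
	}
	fmt.Print(string(out))
}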

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.73s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.73s)
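
The degraded check runs "profile list --output json" and inspects the reported cluster status. The sketch below decodes the output loosely rather than mirroring minikube's exact schema, which is enough to inspect the same fields by hand (assumes the profile from the runs above):

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	out, err := exec.Command("minikube", "profile", "list", "--output", "json").Output()
	if err != nil {
		log.Fatal(err)
	}
	// Loose decode: top level is a map of profile groups.
	var profiles map[string]interface{}
	if err := json.Unmarshal(out, &profiles); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%+v\n", profiles)
}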

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (18.05s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-372281 node start m02 -v=7 --alsologtostderr
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-372281 node start m02 -v=7 --alsologtostderr: (17.043900036s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-372281 status -v=7 --alsologtostderr
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (18.05s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.96s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.96s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (135.61s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-372281 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-372281 -v=7 --alsologtostderr
E1028 10:55:46.363831 1319098 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/functional-355847/client.crt: no such file or directory" logger="UnhandledError"
E1028 10:55:46.370250 1319098 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/functional-355847/client.crt: no such file or directory" logger="UnhandledError"
E1028 10:55:46.381631 1319098 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/functional-355847/client.crt: no such file or directory" logger="UnhandledError"
E1028 10:55:46.403018 1319098 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/functional-355847/client.crt: no such file or directory" logger="UnhandledError"
E1028 10:55:46.444407 1319098 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/functional-355847/client.crt: no such file or directory" logger="UnhandledError"
E1028 10:55:46.526090 1319098 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/functional-355847/client.crt: no such file or directory" logger="UnhandledError"
E1028 10:55:46.687531 1319098 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/functional-355847/client.crt: no such file or directory" logger="UnhandledError"
E1028 10:55:47.009156 1319098 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/functional-355847/client.crt: no such file or directory" logger="UnhandledError"
E1028 10:55:47.650759 1319098 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/functional-355847/client.crt: no such file or directory" logger="UnhandledError"
E1028 10:55:48.932793 1319098 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/functional-355847/client.crt: no such file or directory" logger="UnhandledError"
E1028 10:55:51.494113 1319098 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/functional-355847/client.crt: no such file or directory" logger="UnhandledError"
E1028 10:55:56.616171 1319098 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/functional-355847/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 stop -p ha-372281 -v=7 --alsologtostderr: (37.213483088s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 start -p ha-372281 --wait=true -v=7 --alsologtostderr
E1028 10:56:06.858331 1319098 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/functional-355847/client.crt: no such file or directory" logger="UnhandledError"
E1028 10:56:27.340143 1319098 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/functional-355847/client.crt: no such file or directory" logger="UnhandledError"
E1028 10:57:08.301940 1319098 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/functional-355847/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 start -p ha-372281 --wait=true -v=7 --alsologtostderr: (1m38.235709752s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-372281
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (135.61s)
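
RestartClusterKeepsNodes records "node list" before a full stop/start cycle and asserts the same nodes come back. A condensed equivalent, assuming the profile above:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func nodeList(profile string) string {
	out, err := exec.Command("minikube", "node", "list", "-p", profile).Output()
	if err != nil {
		log.Fatal(err)
	}
	return string(out)
}

func main() {
	p := "ha-372281"
	before := nodeList(p)
	for _, args := range [][]string{{"stop", "-p", p}, {"start", "-p", p, "--wait=true"}} {
		if err := exec.Command("minikube", args...).Run(); err != nil {
			log.Fatalf("minikube %v: %v", args, err)
		}
	}
	if after := nodeList(p); after != before {
		log.Fatalf("node list changed:\nbefore:\n%s\nafter:\n%s", before, after)
	}
	fmt.Print("nodes preserved across restart:\n", before)
}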

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (10.67s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-372281 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-372281 node delete m03 -v=7 --alsologtostderr: (9.75886214s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-372281 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.67s)
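
The final assertion above walks each node's Ready condition via a go-template. The same check can be phrased with kubectl's jsonpath output; this variant is a sketch, not the harness code:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// Every remaining node's Ready condition should report "True".
	out, err := exec.Command("kubectl", "get", "nodes", "-o",
		`jsonpath={range .items[*]}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}`).Output()
	if err != nil {
		log.Fatal(err)
	}
	for _, status := range strings.Fields(string(out)) {
		if status != "True" {
			fmt.Println("node not ready, status:", status)
		}
	}
}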

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.73s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.73s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (36.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-372281 stop -v=7 --alsologtostderr
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-372281 stop -v=7 --alsologtostderr: (35.95396926s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-372281 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-372281 status -v=7 --alsologtostderr: exit status 7 (113.728242ms)

                                                
                                                
-- stdout --
	ha-372281
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-372281-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-372281-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1028 10:58:28.852277 1388599 out.go:345] Setting OutFile to fd 1 ...
	I1028 10:58:28.852425 1388599 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 10:58:28.852436 1388599 out.go:358] Setting ErrFile to fd 2...
	I1028 10:58:28.852442 1388599 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 10:58:28.852703 1388599 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19876-1313708/.minikube/bin
	I1028 10:58:28.852896 1388599 out.go:352] Setting JSON to false
	I1028 10:58:28.852926 1388599 mustload.go:65] Loading cluster: ha-372281
	I1028 10:58:28.852970 1388599 notify.go:220] Checking for updates...
	I1028 10:58:28.853357 1388599 config.go:182] Loaded profile config "ha-372281": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
	I1028 10:58:28.853374 1388599 status.go:174] checking status of ha-372281 ...
	I1028 10:58:28.853923 1388599 cli_runner.go:164] Run: docker container inspect ha-372281 --format={{.State.Status}}
	I1028 10:58:28.873968 1388599 status.go:371] ha-372281 host status = "Stopped" (err=<nil>)
	I1028 10:58:28.873991 1388599 status.go:384] host is not running, skipping remaining checks
	I1028 10:58:28.873999 1388599 status.go:176] ha-372281 status: &{Name:ha-372281 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1028 10:58:28.874031 1388599 status.go:174] checking status of ha-372281-m02 ...
	I1028 10:58:28.874366 1388599 cli_runner.go:164] Run: docker container inspect ha-372281-m02 --format={{.State.Status}}
	I1028 10:58:28.896574 1388599 status.go:371] ha-372281-m02 host status = "Stopped" (err=<nil>)
	I1028 10:58:28.896602 1388599 status.go:384] host is not running, skipping remaining checks
	I1028 10:58:28.896612 1388599 status.go:176] ha-372281-m02 status: &{Name:ha-372281-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1028 10:58:28.896631 1388599 status.go:174] checking status of ha-372281-m04 ...
	I1028 10:58:28.896933 1388599 cli_runner.go:164] Run: docker container inspect ha-372281-m04 --format={{.State.Status}}
	I1028 10:58:28.913372 1388599 status.go:371] ha-372281-m04 host status = "Stopped" (err=<nil>)
	I1028 10:58:28.913401 1388599 status.go:384] host is not running, skipping remaining checks
	I1028 10:58:28.913413 1388599 status.go:176] ha-372281-m04 status: &{Name:ha-372281-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.07s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (80.02s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 start -p ha-372281 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E1028 10:58:30.223307 1319098 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/functional-355847/client.crt: no such file or directory" logger="UnhandledError"
E1028 10:59:28.001978 1319098 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/addons-487046/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 start -p ha-372281 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m19.10140203s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-372281 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (80.02s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.81s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.81s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (44.63s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-372281 --control-plane -v=7 --alsologtostderr
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 node add -p ha-372281 --control-plane -v=7 --alsologtostderr: (43.645368832s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-372281 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (44.63s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.95s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.95s)

                                                
                                    
TestJSONOutput/start/Command (91.26s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-398335 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd
E1028 11:00:46.362712 1319098 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/functional-355847/client.crt: no such file or directory" logger="UnhandledError"
E1028 11:01:14.064803 1319098 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/functional-355847/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-398335 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd: (1m31.260941207s)
--- PASS: TestJSONOutput/start/Command (91.26s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.74s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-398335 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.74s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.65s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-398335 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.65s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.8s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-398335 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-398335 --output=json --user=testUser: (5.800631219s)
--- PASS: TestJSONOutput/stop/Command (5.80s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.23s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-358798 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-358798 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (90.420884ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"290ef1a7-350b-4dc8-bd76-7a7716168dc7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-358798] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"51e5eb72-6f68-4867-949d-09ddff4c5054","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19876"}}
	{"specversion":"1.0","id":"02972297-5f54-4a02-9a5e-4d0bdae98653","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"788fcf9a-516d-4e8f-8c47-8d74aa2af58e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19876-1313708/kubeconfig"}}
	{"specversion":"1.0","id":"d0798143-b4f2-4e7d-ae81-8743274a858d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19876-1313708/.minikube"}}
	{"specversion":"1.0","id":"4748009a-cc2f-4a7d-9ac6-d77a31939304","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"eb934da8-f230-476a-8b06-59be97810d4b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"a4084694-a5a2-4ec6-9f75-bce780258b2f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-358798" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-358798
--- PASS: TestErrorJSONOutput (0.23s)
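
Every line emitted under --output=json, including the error event quoted above, is a CloudEvents envelope with the payload under "data". A sketch that streams and decodes such a run (profile name is illustrative):

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// event captures only the fields this sketch cares about.
type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	cmd := exec.Command("minikube", "start", "-p", "demo", "--output=json")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		log.Fatal(err)
	}
	if err := cmd.Start(); err != nil {
		log.Fatal(err)
	}
	sc := bufio.NewScanner(stdout)
	for sc.Scan() {
		var ev event
		if json.Unmarshal(sc.Bytes(), &ev) != nil {
			continue // tolerate any non-JSON noise
		}
		fmt.Println(ev.Type, "->", ev.Data["message"])
	}
	_ = cmd.Wait() // a non-zero exit is expected for error cases like the one above
}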

                                                
                                    
TestKicCustomNetwork/create_custom_network (42.22s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-318545 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-318545 --network=: (40.072496331s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-318545" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-318545
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-318545: (2.126774274s)
--- PASS: TestKicCustomNetwork/create_custom_network (42.22s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (31.61s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-826969 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-826969 --network=bridge: (29.521119449s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-826969" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-826969
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-826969: (2.06809052s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (31.61s)

                                                
                                    
TestKicExistingNetwork (31.93s)

                                                
                                                
=== RUN   TestKicExistingNetwork
I1028 11:03:40.514415 1319098 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1028 11:03:40.533821 1319098 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1028 11:03:40.533903 1319098 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1028 11:03:40.533922 1319098 cli_runner.go:164] Run: docker network inspect existing-network
W1028 11:03:40.549292 1319098 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1028 11:03:40.549324 1319098 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I1028 11:03:40.549341 1319098 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I1028 11:03:40.549528 1319098 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1028 11:03:40.565162 1319098 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-e8a2656e00eb IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:39:ff:27:31} reservation:<nil>}
I1028 11:03:40.565521 1319098 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001da2f10}
I1028 11:03:40.565553 1319098 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1028 11:03:40.565606 1319098 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1028 11:03:40.634359 1319098 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-382551 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-382551 --network=existing-network: (29.805883966s)
helpers_test.go:175: Cleaning up "existing-network-382551" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-382551
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-382551: (1.968911979s)
I1028 11:04:12.429103 1319098 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (31.93s)
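
TestKicExistingNetwork pre-creates the bridge network with the docker command logged at 11:03:40, then points minikube at it via --network. A standalone equivalent, with the masquerade/icc driver options omitted for brevity:

package main

import (
	"log"
	"os/exec"
)

// must runs a command and aborts with its combined output on failure.
func must(name string, args ...string) {
	if out, err := exec.Command(name, args...).CombinedOutput(); err != nil {
		log.Fatalf("%s %v: %v\n%s", name, args, err, out)
	}
}

func main() {
	// Pre-create the bridge network, then hand it to minikube.
	must("docker", "network", "create", "--driver=bridge",
		"--subnet=192.168.58.0/24", "--gateway=192.168.58.1", "existing-network")
	must("minikube", "start", "-p", "existing-network-382551",
		"--network=existing-network")
}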

                                                
                                    
TestKicCustomSubnet (34.11s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-885627 --subnet=192.168.60.0/24
E1028 11:04:28.001299 1319098 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/addons-487046/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-885627 --subnet=192.168.60.0/24: (32.035495768s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-885627 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-885627" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-885627
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-885627: (2.052029719s)
--- PASS: TestKicCustomSubnet (34.11s)

                                                
                                    
TestKicStaticIP (34.48s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-935714 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-935714 --static-ip=192.168.200.200: (32.288218432s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-935714 ip
helpers_test.go:175: Cleaning up "static-ip-935714" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-935714
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-935714: (2.049641429s)
--- PASS: TestKicStaticIP (34.48s)
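
TestKicStaticIP pins the node address and confirms it with "minikube ip". A sketch of the same round trip, with the profile name and address taken from the log:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	profile, want := "static-ip-935714", "192.168.200.200"
	if out, err := exec.Command("minikube", "start", "-p", profile,
		"--static-ip="+want).CombinedOutput(); err != nil {
		log.Fatalf("start: %v\n%s", err, out)
	}
	out, err := exec.Command("minikube", "-p", profile, "ip").Output()
	if err != nil {
		log.Fatal(err)
	}
	if got := strings.TrimSpace(string(out)); got != want {
		log.Fatalf("got %s, want %s", got, want)
	}
	fmt.Println("static IP confirmed:", want)
}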

                                                
                                    
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (67.2s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-325323 --driver=docker  --container-runtime=containerd
E1028 11:05:46.368237 1319098 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/functional-355847/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-325323 --driver=docker  --container-runtime=containerd: (28.506601316s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-327924 --driver=docker  --container-runtime=containerd
E1028 11:05:51.067772 1319098 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/addons-487046/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-327924 --driver=docker  --container-runtime=containerd: (33.058553861s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-325323
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-327924
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-327924" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-327924
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-327924: (2.02260481s)
helpers_test.go:175: Cleaning up "first-325323" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-325323
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-325323: (2.209876186s)
--- PASS: TestMinikubeProfile (67.20s)
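
TestMinikubeProfile drives two clusters and flips the active profile between them before cleaning up. A condensed version of the same flow, with illustrative profile names:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func mk(args ...string) {
	if out, err := exec.Command("minikube", args...).CombinedOutput(); err != nil {
		log.Fatalf("minikube %v: %v\n%s", args, err, out)
	}
}

func main() {
	mk("start", "-p", "first", "--driver=docker", "--container-runtime=containerd")
	mk("start", "-p", "second", "--driver=docker", "--container-runtime=containerd")
	mk("profile", "first") // make "first" the active profile
	out, err := exec.Command("minikube", "profile", "list", "-ojson").Output()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(string(out))
	mk("delete", "-p", "second")
	mk("delete", "-p", "first")
}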

                                                
                                    
TestMountStart/serial/StartWithMountFirst (8.75s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-431450 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-431450 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (7.747589751s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.75s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-431450 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)
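
The VerifyMount* steps in this group are all the same probe: list the mount target inside the guest over SSH. A sketch of that probe; /minikube-host is simply the target these runs use:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// If the 9p mount is live, listing it succeeds and shows host files.
	out, err := exec.Command("minikube", "-p", "mount-start-1-431450",
		"ssh", "--", "ls", "/minikube-host").CombinedOutput()
	if err != nil {
		log.Fatalf("mount check failed: %v\n%s", err, out)
	}
	fmt.Print(string(out))
}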

                                                
                                    
TestMountStart/serial/StartWithMountSecond (9.16s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-433403 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-433403 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (8.161296534s)
--- PASS: TestMountStart/serial/StartWithMountSecond (9.16s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-433403 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.61s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-431450 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-431450 --alsologtostderr -v=5: (1.612558254s)
--- PASS: TestMountStart/serial/DeleteFirst (1.61s)

TestMountStart/serial/VerifyMountPostDelete (0.25s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-433403 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.25s)

TestMountStart/serial/Stop (1.2s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-433403
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-433403: (1.197815958s)
--- PASS: TestMountStart/serial/Stop (1.20s)

TestMountStart/serial/RestartStopped (7.13s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-433403
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-433403: (6.132651628s)
--- PASS: TestMountStart/serial/RestartStopped (7.13s)

TestMountStart/serial/VerifyMountPostStop (0.25s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-433403 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.25s)

TestMultiNode/serial/FreshStart2Nodes (69.32s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-191781 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-191781 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m8.808702217s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-191781 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (69.32s)

TestMultiNode/serial/DeployApp2Nodes (20.71s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-191781 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-191781 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-191781 -- rollout status deployment/busybox: (18.844391278s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-191781 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-191781 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-191781 -- exec busybox-7dff88458-5j4nt -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-191781 -- exec busybox-7dff88458-5plnc -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-191781 -- exec busybox-7dff88458-5j4nt -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-191781 -- exec busybox-7dff88458-5plnc -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-191781 -- exec busybox-7dff88458-5j4nt -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-191781 -- exec busybox-7dff88458-5plnc -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (20.71s)

TestMultiNode/serial/PingHostFrom2Pods (1s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-191781 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-191781 -- exec busybox-7dff88458-5j4nt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-191781 -- exec busybox-7dff88458-5j4nt -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-191781 -- exec busybox-7dff88458-5plnc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-191781 -- exec busybox-7dff88458-5plnc -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.00s)
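Note: PingHostFrom2Pods checks pod-to-host connectivity: inside each busybox pod it resolves host.minikube.internal, pulls the address out of nslookup's fifth output line, and pings it (192.168.67.1 here). A sketch of the same probe against a running cluster, with an illustrative pod name:

	kubectl exec <busybox-pod> -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
	kubectl exec <busybox-pod> -- sh -c "ping -c 1 <host-ip>"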

TestMultiNode/serial/AddNode (19.28s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-191781 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-191781 -v 3 --alsologtostderr: (18.629784335s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-191781 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (19.28s)

TestMultiNode/serial/MultiNodeLabels (0.1s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-191781 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.10s)

TestMultiNode/serial/ProfileList (0.67s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.67s)

TestMultiNode/serial/CopyFile (9.74s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-191781 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-191781 cp testdata/cp-test.txt multinode-191781:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-191781 ssh -n multinode-191781 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-191781 cp multinode-191781:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4009501462/001/cp-test_multinode-191781.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-191781 ssh -n multinode-191781 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-191781 cp multinode-191781:/home/docker/cp-test.txt multinode-191781-m02:/home/docker/cp-test_multinode-191781_multinode-191781-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-191781 ssh -n multinode-191781 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-191781 ssh -n multinode-191781-m02 "sudo cat /home/docker/cp-test_multinode-191781_multinode-191781-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-191781 cp multinode-191781:/home/docker/cp-test.txt multinode-191781-m03:/home/docker/cp-test_multinode-191781_multinode-191781-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-191781 ssh -n multinode-191781 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-191781 ssh -n multinode-191781-m03 "sudo cat /home/docker/cp-test_multinode-191781_multinode-191781-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-191781 cp testdata/cp-test.txt multinode-191781-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-191781 ssh -n multinode-191781-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-191781 cp multinode-191781-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4009501462/001/cp-test_multinode-191781-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-191781 ssh -n multinode-191781-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-191781 cp multinode-191781-m02:/home/docker/cp-test.txt multinode-191781:/home/docker/cp-test_multinode-191781-m02_multinode-191781.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-191781 ssh -n multinode-191781-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-191781 ssh -n multinode-191781 "sudo cat /home/docker/cp-test_multinode-191781-m02_multinode-191781.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-191781 cp multinode-191781-m02:/home/docker/cp-test.txt multinode-191781-m03:/home/docker/cp-test_multinode-191781-m02_multinode-191781-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-191781 ssh -n multinode-191781-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-191781 ssh -n multinode-191781-m03 "sudo cat /home/docker/cp-test_multinode-191781-m02_multinode-191781-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-191781 cp testdata/cp-test.txt multinode-191781-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-191781 ssh -n multinode-191781-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-191781 cp multinode-191781-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4009501462/001/cp-test_multinode-191781-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-191781 ssh -n multinode-191781-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-191781 cp multinode-191781-m03:/home/docker/cp-test.txt multinode-191781:/home/docker/cp-test_multinode-191781-m03_multinode-191781.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-191781 ssh -n multinode-191781-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-191781 ssh -n multinode-191781 "sudo cat /home/docker/cp-test_multinode-191781-m03_multinode-191781.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-191781 cp multinode-191781-m03:/home/docker/cp-test.txt multinode-191781-m02:/home/docker/cp-test_multinode-191781-m03_multinode-191781-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-191781 ssh -n multinode-191781-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-191781 ssh -n multinode-191781-m02 "sudo cat /home/docker/cp-test_multinode-191781-m03_multinode-191781-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.74s)
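Note: CopyFile drives minikube cp in every direction (host to node, node to host, node to node) and verifies each copy with ssh ... sudo cat. The general shapes, as exercised above (profile, node, and path names are illustrative):

	minikube -p <profile> cp <local-file> <node>:<remote-path>      # host -> node
	minikube -p <profile> cp <node>:<remote-path> <local-file>      # node -> host
	minikube -p <profile> cp <node-a>:<path> <node-b>:<path>        # node -> node
	minikube -p <profile> ssh -n <node> "sudo cat <remote-path>"    # verify contents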

TestMultiNode/serial/StopNode (2.23s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-191781 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-191781 node stop m03: (1.229665699s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-191781 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-191781 status: exit status 7 (499.570516ms)

-- stdout --
	multinode-191781
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-191781-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-191781-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-191781 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-191781 status --alsologtostderr: exit status 7 (501.145029ms)

-- stdout --
	multinode-191781
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-191781-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-191781-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1028 11:09:01.524647 1442115 out.go:345] Setting OutFile to fd 1 ...
	I1028 11:09:01.524857 1442115 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 11:09:01.524884 1442115 out.go:358] Setting ErrFile to fd 2...
	I1028 11:09:01.524904 1442115 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 11:09:01.525209 1442115 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19876-1313708/.minikube/bin
	I1028 11:09:01.525434 1442115 out.go:352] Setting JSON to false
	I1028 11:09:01.525486 1442115 mustload.go:65] Loading cluster: multinode-191781
	I1028 11:09:01.525598 1442115 notify.go:220] Checking for updates...
	I1028 11:09:01.526067 1442115 config.go:182] Loaded profile config "multinode-191781": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
	I1028 11:09:01.526111 1442115 status.go:174] checking status of multinode-191781 ...
	I1028 11:09:01.527021 1442115 cli_runner.go:164] Run: docker container inspect multinode-191781 --format={{.State.Status}}
	I1028 11:09:01.548407 1442115 status.go:371] multinode-191781 host status = "Running" (err=<nil>)
	I1028 11:09:01.548463 1442115 host.go:66] Checking if "multinode-191781" exists ...
	I1028 11:09:01.548771 1442115 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-191781
	I1028 11:09:01.579551 1442115 host.go:66] Checking if "multinode-191781" exists ...
	I1028 11:09:01.579988 1442115 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1028 11:09:01.580042 1442115 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-191781
	I1028 11:09:01.603321 1442115 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40220 SSHKeyPath:/home/jenkins/minikube-integration/19876-1313708/.minikube/machines/multinode-191781/id_rsa Username:docker}
	I1028 11:09:01.696473 1442115 ssh_runner.go:195] Run: systemctl --version
	I1028 11:09:01.700555 1442115 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 11:09:01.712298 1442115 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1028 11:09:01.756862 1442115 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:61 SystemTime:2024-10-28 11:09:01.747238761 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1028 11:09:01.757441 1442115 kubeconfig.go:125] found "multinode-191781" server: "https://192.168.67.2:8443"
	I1028 11:09:01.757473 1442115 api_server.go:166] Checking apiserver status ...
	I1028 11:09:01.757515 1442115 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 11:09:01.768704 1442115 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1427/cgroup
	I1028 11:09:01.777803 1442115 api_server.go:182] apiserver freezer: "6:freezer:/docker/dbf236adbc15498ed71ba1ec25843aa6ad7e6ee80e80fd98915f309ccdd0c8a5/kubepods/burstable/podfcd6540e3643bf2da9b8e99f46a5bbee/575b246e913849677eb4b1c57eec264066db59f617a892a410c819195fdb84e1"
	I1028 11:09:01.777873 1442115 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/dbf236adbc15498ed71ba1ec25843aa6ad7e6ee80e80fd98915f309ccdd0c8a5/kubepods/burstable/podfcd6540e3643bf2da9b8e99f46a5bbee/575b246e913849677eb4b1c57eec264066db59f617a892a410c819195fdb84e1/freezer.state
	I1028 11:09:01.786215 1442115 api_server.go:204] freezer state: "THAWED"
	I1028 11:09:01.786244 1442115 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1028 11:09:01.794001 1442115 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1028 11:09:01.794038 1442115 status.go:463] multinode-191781 apiserver status = Running (err=<nil>)
	I1028 11:09:01.794053 1442115 status.go:176] multinode-191781 status: &{Name:multinode-191781 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1028 11:09:01.794073 1442115 status.go:174] checking status of multinode-191781-m02 ...
	I1028 11:09:01.794372 1442115 cli_runner.go:164] Run: docker container inspect multinode-191781-m02 --format={{.State.Status}}
	I1028 11:09:01.810988 1442115 status.go:371] multinode-191781-m02 host status = "Running" (err=<nil>)
	I1028 11:09:01.811012 1442115 host.go:66] Checking if "multinode-191781-m02" exists ...
	I1028 11:09:01.811345 1442115 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-191781-m02
	I1028 11:09:01.830942 1442115 host.go:66] Checking if "multinode-191781-m02" exists ...
	I1028 11:09:01.831267 1442115 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1028 11:09:01.831325 1442115 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-191781-m02
	I1028 11:09:01.848440 1442115 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40225 SSHKeyPath:/home/jenkins/minikube-integration/19876-1313708/.minikube/machines/multinode-191781-m02/id_rsa Username:docker}
	I1028 11:09:01.936746 1442115 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 11:09:01.948315 1442115 status.go:176] multinode-191781-m02 status: &{Name:multinode-191781-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1028 11:09:01.948350 1442115 status.go:174] checking status of multinode-191781-m03 ...
	I1028 11:09:01.948657 1442115 cli_runner.go:164] Run: docker container inspect multinode-191781-m03 --format={{.State.Status}}
	I1028 11:09:01.965689 1442115 status.go:371] multinode-191781-m03 host status = "Stopped" (err=<nil>)
	I1028 11:09:01.965715 1442115 status.go:384] host is not running, skipping remaining checks
	I1028 11:09:01.965722 1442115 status.go:176] multinode-191781-m03 status: &{Name:multinode-191781-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.23s)
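Note: StopNode stops only the m03 worker, after which minikube status exits 7 because one node is down while the others still report Running; StartAfterStop below brings it back. The per-node lifecycle commands under test, with an illustrative profile name:

	minikube -p <profile> node stop m03
	minikube -p <profile> node start m03
	minikube -p <profile> status    # exit status 7 while any node is stopped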

TestMultiNode/serial/StartAfterStop (9.9s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-191781 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-191781 node start m03 -v=7 --alsologtostderr: (9.158947467s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-191781 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.90s)

TestMultiNode/serial/RestartKeepsNodes (97.31s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-191781
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-191781
E1028 11:09:28.001418 1319098 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/addons-487046/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-191781: (25.019586756s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-191781 --wait=true -v=8 --alsologtostderr
E1028 11:10:46.363757 1319098 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/functional-355847/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-191781 --wait=true -v=8 --alsologtostderr: (1m12.155882335s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-191781
--- PASS: TestMultiNode/serial/RestartKeepsNodes (97.31s)

TestMultiNode/serial/DeleteNode (5.51s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-191781 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-191781 node delete m03: (4.851467068s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-191781 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.51s)

TestMultiNode/serial/StopMultiNode (24s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-191781 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-191781 stop: (23.798615365s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-191781 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-191781 status: exit status 7 (102.130418ms)

-- stdout --
	multinode-191781
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-191781-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-191781 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-191781 status --alsologtostderr: exit status 7 (100.684774ms)

-- stdout --
	multinode-191781
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-191781-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1028 11:11:18.640776 1450564 out.go:345] Setting OutFile to fd 1 ...
	I1028 11:11:18.640964 1450564 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 11:11:18.640993 1450564 out.go:358] Setting ErrFile to fd 2...
	I1028 11:11:18.641013 1450564 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 11:11:18.641264 1450564 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19876-1313708/.minikube/bin
	I1028 11:11:18.641470 1450564 out.go:352] Setting JSON to false
	I1028 11:11:18.641523 1450564 mustload.go:65] Loading cluster: multinode-191781
	I1028 11:11:18.641617 1450564 notify.go:220] Checking for updates...
	I1028 11:11:18.641985 1450564 config.go:182] Loaded profile config "multinode-191781": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
	I1028 11:11:18.642030 1450564 status.go:174] checking status of multinode-191781 ...
	I1028 11:11:18.642536 1450564 cli_runner.go:164] Run: docker container inspect multinode-191781 --format={{.State.Status}}
	I1028 11:11:18.660794 1450564 status.go:371] multinode-191781 host status = "Stopped" (err=<nil>)
	I1028 11:11:18.660814 1450564 status.go:384] host is not running, skipping remaining checks
	I1028 11:11:18.660825 1450564 status.go:176] multinode-191781 status: &{Name:multinode-191781 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1028 11:11:18.660858 1450564 status.go:174] checking status of multinode-191781-m02 ...
	I1028 11:11:18.661153 1450564 cli_runner.go:164] Run: docker container inspect multinode-191781-m02 --format={{.State.Status}}
	I1028 11:11:18.691171 1450564 status.go:371] multinode-191781-m02 host status = "Stopped" (err=<nil>)
	I1028 11:11:18.691192 1450564 status.go:384] host is not running, skipping remaining checks
	I1028 11:11:18.691198 1450564 status.go:176] multinode-191781-m02 status: &{Name:multinode-191781-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.00s)

TestMultiNode/serial/RestartMultiNode (53.64s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-191781 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E1028 11:12:09.426935 1319098 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/functional-355847/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-191781 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (52.976839378s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-191781 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (53.64s)

TestMultiNode/serial/ValidateNameConflict (31.98s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-191781
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-191781-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-191781-m02 --driver=docker  --container-runtime=containerd: exit status 14 (88.97443ms)

-- stdout --
	* [multinode-191781-m02] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19876
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19876-1313708/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19876-1313708/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-191781-m02' is duplicated with machine name 'multinode-191781-m02' in profile 'multinode-191781'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-191781-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-191781-m03 --driver=docker  --container-runtime=containerd: (29.567722413s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-191781
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-191781: exit status 80 (335.131368ms)

-- stdout --
	* Adding node m03 to cluster multinode-191781 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-191781-m03 already exists in multinode-191781-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_6.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-191781-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-191781-m03: (1.937755377s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (31.98s)
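Note: ValidateNameConflict covers two guards visible in the stderr blocks above: starting a profile whose name collides with an existing cluster's machine name (<profile>-m02) is rejected with MK_USAGE (exit 14), and node add fails with GUEST_NODE_ADD (exit 80) when the next node name is already taken by another profile. A sketch of the first guard, with an illustrative profile name:

	minikube start -p <existing-profile>-m02 --driver=docker    # rejected: profile name must be unique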

TestPreload (107.75s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-810394 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-810394 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m11.564339946s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-810394 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-810394 image pull gcr.io/k8s-minikube/busybox: (2.007204546s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-810394
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-810394: (12.046178505s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-810394 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
E1028 11:14:28.001307 1319098 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/addons-487046/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-810394 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (19.328407161s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-810394 image list
helpers_test.go:175: Cleaning up "test-preload-810394" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-810394
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-810394: (2.445715201s)
--- PASS: TestPreload (107.75s)

TestScheduledStopUnix (107.94s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-871118 --memory=2048 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-871118 --memory=2048 --driver=docker  --container-runtime=containerd: (31.27134132s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-871118 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-871118 -n scheduled-stop-871118
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-871118 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1028 11:15:07.771936 1319098 retry.go:31] will retry after 110.888µs: open /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/scheduled-stop-871118/pid: no such file or directory
I1028 11:15:07.773032 1319098 retry.go:31] will retry after 78.371µs: open /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/scheduled-stop-871118/pid: no such file or directory
I1028 11:15:07.774429 1319098 retry.go:31] will retry after 206.591µs: open /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/scheduled-stop-871118/pid: no such file or directory
I1028 11:15:07.777384 1319098 retry.go:31] will retry after 265.173µs: open /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/scheduled-stop-871118/pid: no such file or directory
I1028 11:15:07.778486 1319098 retry.go:31] will retry after 381.274µs: open /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/scheduled-stop-871118/pid: no such file or directory
I1028 11:15:07.779599 1319098 retry.go:31] will retry after 397.183µs: open /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/scheduled-stop-871118/pid: no such file or directory
I1028 11:15:07.780762 1319098 retry.go:31] will retry after 768.791µs: open /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/scheduled-stop-871118/pid: no such file or directory
I1028 11:15:07.781874 1319098 retry.go:31] will retry after 2.523363ms: open /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/scheduled-stop-871118/pid: no such file or directory
I1028 11:15:07.785063 1319098 retry.go:31] will retry after 2.388476ms: open /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/scheduled-stop-871118/pid: no such file or directory
I1028 11:15:07.788267 1319098 retry.go:31] will retry after 3.029648ms: open /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/scheduled-stop-871118/pid: no such file or directory
I1028 11:15:07.791415 1319098 retry.go:31] will retry after 7.008084ms: open /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/scheduled-stop-871118/pid: no such file or directory
I1028 11:15:07.798570 1319098 retry.go:31] will retry after 5.419663ms: open /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/scheduled-stop-871118/pid: no such file or directory
I1028 11:15:07.804768 1319098 retry.go:31] will retry after 14.91821ms: open /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/scheduled-stop-871118/pid: no such file or directory
I1028 11:15:07.820001 1319098 retry.go:31] will retry after 18.962676ms: open /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/scheduled-stop-871118/pid: no such file or directory
I1028 11:15:07.839196 1319098 retry.go:31] will retry after 31.171578ms: open /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/scheduled-stop-871118/pid: no such file or directory
I1028 11:15:07.871419 1319098 retry.go:31] will retry after 49.459131ms: open /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/scheduled-stop-871118/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-871118 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-871118 -n scheduled-stop-871118
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-871118
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-871118 --schedule 15s
E1028 11:15:46.363029 1319098 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/functional-355847/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-871118
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-871118: exit status 7 (71.061066ms)

-- stdout --
	scheduled-stop-871118
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-871118 -n scheduled-stop-871118
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-871118 -n scheduled-stop-871118: exit status 7 (75.3167ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-871118" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-871118
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-871118: (5.140763596s)
--- PASS: TestScheduledStopUnix (107.94s)
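Note: TestScheduledStopUnix exercises the daemonized scheduled stop. A sketch of the commands under test, assuming an illustrative profile name:

	minikube stop -p <profile> --schedule 5m       # schedule a stop five minutes out
	minikube stop -p <profile> --cancel-scheduled  # cancel a pending scheduled stop
	minikube status -p <profile>                   # exit status 7 once stopped (may be ok)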

TestInsufficientStorage (12.83s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-543255 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-543255 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (10.399666994s)

-- stdout --
	{"specversion":"1.0","id":"1a968f52-7ac2-41be-b56d-0ba8c70da07b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-543255] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"3f965502-a3c2-4150-9ee2-0c97ae1444a9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19876"}}
	{"specversion":"1.0","id":"1672c1d5-8311-4001-8c35-dfd2832fb98d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"e196783c-a835-4345-84d6-7ee5007b2a29","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19876-1313708/kubeconfig"}}
	{"specversion":"1.0","id":"c009e796-515b-472b-bcdc-25b21c4ce212","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19876-1313708/.minikube"}}
	{"specversion":"1.0","id":"f04353db-f26b-4631-b435-a8405c2648f1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"ec6cf7bf-73fc-4516-ac74-c8ad20aea11e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"ea15f67e-9509-425c-bb7b-4ff747931fe5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"86c6b4dd-9492-426c-a22f-190032b7a52a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"49bc5a72-c72e-4596-b4d5-be9f86a14481","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"ab3befab-76d4-4b2e-97e3-8b082c4b8c2e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"caeb70ef-f572-49ef-8925-f68b9d4d408b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-543255\" primary control-plane node in \"insufficient-storage-543255\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"5b0245f9-ef3a-4ffc-bdb8-22a062f14410","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.45-1729876044-19868 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"8d08d49c-9adf-48fe-a830-bf0cd27964c5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"2ed0c9c8-4588-4485-8dcb-1eb291bd8223","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-543255 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-543255 --output=json --layout=cluster: exit status 7 (267.35486ms)

-- stdout --
	{"Name":"insufficient-storage-543255","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-543255","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1028 11:16:34.618706 1469182 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-543255" does not appear in /home/jenkins/minikube-integration/19876-1313708/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-543255 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-543255 --output=json --layout=cluster: exit status 7 (283.748056ms)

-- stdout --
	{"Name":"insufficient-storage-543255","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-543255","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1028 11:16:34.903852 1469242 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-543255" does not appear in /home/jenkins/minikube-integration/19876-1313708/kubeconfig
	E1028 11:16:34.913865 1469242 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/insufficient-storage-543255/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-543255" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-543255
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-543255: (1.881438213s)
--- PASS: TestInsufficientStorage (12.83s)
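Note: TestInsufficientStorage simulates a nearly full /var via the MINIKUBE_TEST_STORAGE_CAPACITY and MINIKUBE_TEST_AVAILABLE_STORAGE variables shown in the JSON events above; start aborts with RSRC_DOCKER_STORAGE (exit 26) and status reports StatusCode 507 (InsufficientStorage). The remediation suggested by the error message itself also applies outside the test:

	docker system prune                    # remove unused Docker data (optionally with -a)
	minikube ssh -- docker system prune    # when using the Docker container runtime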

TestRunningBinaryUpgrade (76.91s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.2556604375 start -p running-upgrade-498590 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.2556604375 start -p running-upgrade-498590 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (39.026033952s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-498590 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-498590 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (34.298208621s)
helpers_test.go:175: Cleaning up "running-upgrade-498590" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-498590
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-498590: (2.904502194s)
--- PASS: TestRunningBinaryUpgrade (76.91s)

TestKubernetesUpgrade (108.95s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-475680 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-475680 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m4.457187491s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-475680
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-475680: (1.619387013s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-475680 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-475680 status --format={{.Host}}: exit status 7 (110.615835ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-475680 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-475680 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (32.885114545s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-475680 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-475680 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-475680 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd: exit status 106 (129.696409ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-475680] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19876
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19876-1313708/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19876-1313708/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.2 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-475680
	    minikube start -p kubernetes-upgrade-475680 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4756802 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.2, by running:
	    
	    minikube start -p kubernetes-upgrade-475680 --kubernetes-version=v1.31.2
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-475680 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E1028 11:19:28.008231 1319098 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/addons-487046/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-475680 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (6.903710123s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-475680" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-475680
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-475680: (2.700702781s)
--- PASS: TestKubernetesUpgrade (108.95s)
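
Note the downgrade step above: minikube refuses to move an existing cluster from v1.31.2 back to v1.20.0 (exit status 106, K8S_DOWNGRADE_UNSUPPORTED), and the test asserts on that refusal. The supported path, as the suggestion in the stderr block states, is to recreate the profile at the older version:

	minikube delete -p kubernetes-upgrade-475680
	minikube start -p kubernetes-upgrade-475680 --kubernetes-version=v1.20.0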

                                                
                                    
TestMissingContainerUpgrade (177.37s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.126201815 start -p missing-upgrade-628300 --memory=2200 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.126201815 start -p missing-upgrade-628300 --memory=2200 --driver=docker  --container-runtime=containerd: (1m40.017417605s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-628300
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-628300: (10.317491729s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-628300
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-628300 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-628300 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m4.161177245s)
helpers_test.go:175: Cleaning up "missing-upgrade-628300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-628300
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-628300: (2.146313021s)
--- PASS: TestMissingContainerUpgrade (177.37s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-006408 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-006408 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd: exit status 14 (104.758439ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-006408] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19876
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19876-1313708/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19876-1313708/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
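
This failure is the expected outcome: --no-kubernetes and --kubernetes-version are mutually exclusive, so the test asserts exit status 14. If a version is pinned in the global config rather than passed on the command line, unset it first, as the error message itself suggests:

	minikube config unset kubernetes-version
	minikube start -p NoKubernetes-006408 --no-kubernetes --driver=docker --container-runtime=containerd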

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (39.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-006408 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-006408 --driver=docker  --container-runtime=containerd: (38.912970939s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-006408 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (39.29s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (8.86s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-006408 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-006408 --no-kubernetes --driver=docker  --container-runtime=containerd: (6.514521017s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-006408 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-006408 status -o json: exit status 2 (317.205265ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-006408","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-006408
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-006408: (2.025927313s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (8.86s)

                                                
                                    
TestNoKubernetes/serial/Start (9.32s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-006408 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-006408 --no-kubernetes --driver=docker  --container-runtime=containerd: (9.317191137s)
--- PASS: TestNoKubernetes/serial/Start (9.32s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.25s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-006408 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-006408 "sudo systemctl is-active --quiet service kubelet": exit status 1 (253.737069ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.25s)
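
The non-zero exit here is the assertion, not a failure: systemctl is-active exits 0 only when the unit is active (3 typically means inactive), so a failing check confirms kubelet is not running inside the node. A minimal manual version of the same probe, reusing the command from the log:

	out/minikube-linux-arm64 ssh -p NoKubernetes-006408 "sudo systemctl is-active --quiet service kubelet" \
		&& echo "kubelet active" \
		|| echo "kubelet not active (expected for --no-kubernetes)"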

                                                
                                    
TestNoKubernetes/serial/ProfileList (0.93s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.93s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.22s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-006408
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-006408: (1.219636917s)
--- PASS: TestNoKubernetes/serial/Stop (1.22s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (6.38s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-006408 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-006408 --driver=docker  --container-runtime=containerd: (6.382818998s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.38s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-006408 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-006408 "sudo systemctl is-active --quiet service kubelet": exit status 1 (267.139966ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.71s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.71s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (133.82s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.550886597 start -p stopped-upgrade-261143 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.550886597 start -p stopped-upgrade-261143 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (54.580760512s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.550886597 -p stopped-upgrade-261143 stop
E1028 11:20:46.363152 1319098 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/functional-355847/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.550886597 -p stopped-upgrade-261143 stop: (20.42999596s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-261143 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-261143 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (58.806157381s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (133.82s)
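
This variant upgrades a cluster that was stopped first: the old binary creates and then stops the profile, and the new binary restarts it. A hand-run sketch of the same sequence, using the binary path and profile name from this run:

	/tmp/minikube-v1.26.0.550886597 start -p stopped-upgrade-261143 --memory=2200 --vm-driver=docker --container-runtime=containerd
	/tmp/minikube-v1.26.0.550886597 -p stopped-upgrade-261143 stop
	out/minikube-linux-arm64 start -p stopped-upgrade-261143 --memory=2200 --driver=docker --container-runtime=containerd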

                                                
                                    
TestPause/serial/Start (92.53s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-615001 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-615001 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (1m32.534330446s)
--- PASS: TestPause/serial/Start (92.53s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.18s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-261143
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-261143: (1.181201153s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.18s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (6.37s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-615001 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-615001 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (6.341226928s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (6.37s)

                                                
                                    
TestNetworkPlugins/group/false (5.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-721163 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-721163 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (185.503262ms)

                                                
                                                
-- stdout --
	* [false-721163] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19876
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19876-1313708/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19876-1313708/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1028 11:22:29.349678 1505068 out.go:345] Setting OutFile to fd 1 ...
	I1028 11:22:29.349886 1505068 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 11:22:29.349914 1505068 out.go:358] Setting ErrFile to fd 2...
	I1028 11:22:29.349932 1505068 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 11:22:29.350202 1505068 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19876-1313708/.minikube/bin
	I1028 11:22:29.350652 1505068 out.go:352] Setting JSON to false
	I1028 11:22:29.351819 1505068 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":147880,"bootTime":1729966670,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1028 11:22:29.351922 1505068 start.go:139] virtualization:  
	I1028 11:22:29.357173 1505068 out.go:177] * [false-721163] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1028 11:22:29.359806 1505068 out.go:177]   - MINIKUBE_LOCATION=19876
	I1028 11:22:29.359967 1505068 notify.go:220] Checking for updates...
	I1028 11:22:29.365482 1505068 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 11:22:29.368056 1505068 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19876-1313708/kubeconfig
	I1028 11:22:29.370639 1505068 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19876-1313708/.minikube
	I1028 11:22:29.373382 1505068 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1028 11:22:29.375915 1505068 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 11:22:29.379093 1505068 config.go:182] Loaded profile config "pause-615001": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
	I1028 11:22:29.379191 1505068 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 11:22:29.406584 1505068 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1028 11:22:29.406737 1505068 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1028 11:22:29.466344 1505068 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:34 OomKillDisable:true NGoroutines:52 SystemTime:2024-10-28 11:22:29.456352372 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1028 11:22:29.466454 1505068 docker.go:318] overlay module found
	I1028 11:22:29.469139 1505068 out.go:177] * Using the docker driver based on user configuration
	I1028 11:22:29.471790 1505068 start.go:297] selected driver: docker
	I1028 11:22:29.471806 1505068 start.go:901] validating driver "docker" against <nil>
	I1028 11:22:29.471820 1505068 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 11:22:29.475172 1505068 out.go:201] 
	W1028 11:22:29.477930 1505068 out.go:270] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I1028 11:22:29.480542 1505068 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-721163 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-721163

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-721163

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-721163

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-721163

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-721163

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-721163

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-721163

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-721163

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-721163

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-721163

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-721163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-721163"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-721163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-721163"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-721163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-721163"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-721163

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-721163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-721163"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-721163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-721163"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-721163" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-721163" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-721163" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-721163" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-721163" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-721163" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-721163" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-721163" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-721163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-721163"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-721163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-721163"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-721163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-721163"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-721163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-721163"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-721163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-721163"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-721163" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-721163" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-721163" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-721163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-721163"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-721163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-721163"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-721163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-721163"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-721163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-721163"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-721163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-721163"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19876-1313708/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 28 Oct 2024 11:22:28 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: pause-615001
contexts:
- context:
    cluster: pause-615001
    extensions:
    - extension:
        last-update: Mon, 28 Oct 2024 11:22:28 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: pause-615001
  name: pause-615001
current-context: pause-615001
kind: Config
preferences: {}
users:
- name: pause-615001
  user:
    client-certificate: /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/pause-615001/client.crt
    client-key: /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/pause-615001/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-721163

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-721163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-721163"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-721163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-721163"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-721163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-721163"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-721163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-721163"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-721163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-721163"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-721163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-721163"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-721163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-721163"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-721163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-721163"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-721163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-721163"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-721163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-721163"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-721163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-721163"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-721163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-721163"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-721163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-721163"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-721163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-721163"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-721163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-721163"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-721163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-721163"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-721163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-721163"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-721163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-721163"

                                                
                                                
----------------------- debugLogs end: false-721163 [took: 4.720384384s] --------------------------------
helpers_test.go:175: Cleaning up "false-721163" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-721163
--- PASS: TestNetworkPlugins/group/false (5.10s)
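
The exit status 14 above is what the test wants: the containerd runtime requires a CNI plugin, so --cni=false is rejected up front and no cluster is ever created (which is why every debugLogs probe reports a missing profile or context). For an actual containerd cluster with an explicit CNI, a concrete plugin value works instead, for example (profile name illustrative):

	minikube start -p example --cni=bridge --driver=docker --container-runtime=containerd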

                                                
                                    
TestPause/serial/Pause (0.88s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-615001 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.88s)

                                                
                                    
TestPause/serial/VerifyStatus (0.43s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-615001 --output=json --layout=cluster
E1028 11:22:31.069972 1319098 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/addons-487046/client.crt: no such file or directory" logger="UnhandledError"
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-615001 --output=json --layout=cluster: exit status 2 (425.200022ms)

                                                
                                                
-- stdout --
	{"Name":"pause-615001","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-615001","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.43s)
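
Exit status 2 together with StatusCode 418 ("Paused") is the expected shape for a paused cluster, so scripts polling status should capture the exit code rather than treating any non-zero as an error. A minimal sketch, assuming a POSIX shell:

	out/minikube-linux-arm64 status -p pause-615001 --output=json --layout=cluster
	rc=$?    # 0 = running; 2 here while the cluster is paused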

                                                
                                    
TestPause/serial/Unpause (0.81s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-615001 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.81s)

                                                
                                    
TestPause/serial/PauseAgain (1.09s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-615001 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-615001 --alsologtostderr -v=5: (1.089375473s)
--- PASS: TestPause/serial/PauseAgain (1.09s)

                                                
                                    
TestPause/serial/DeletePaused (2.82s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-615001 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-615001 --alsologtostderr -v=5: (2.817691517s)
--- PASS: TestPause/serial/DeletePaused (2.82s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.17s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-615001
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-615001: exit status 1 (22.122901ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-615001: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.17s)
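
After delete, the test confirms nothing is left behind: the profile list, the container, the named volume, and the network are all checked. The same verification by hand, using the profile name from this run (the volume lookup is expected to fail with "no such volume"):

	docker ps -a --filter name=pause-615001
	docker volume inspect pause-615001
	docker network ls --filter name=pause-615001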

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (142.23s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-674802 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
E1028 11:24:28.001274 1319098 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/addons-487046/client.crt: no such file or directory" logger="UnhandledError"
E1028 11:25:46.363412 1319098 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/functional-355847/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-674802 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: (2m22.233138891s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (142.23s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (8.66s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-674802 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [bfd161a2-8dba-441c-a74c-ca0fe48aea08] Pending
helpers_test.go:344: "busybox" [bfd161a2-8dba-441c-a74c-ca0fe48aea08] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [bfd161a2-8dba-441c-a74c-ca0fe48aea08] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.013180237s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-674802 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.66s)
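
The deploy check creates the busybox pod and polls up to 8m0s for pods matching the integration-test=busybox label to become healthy. A roughly equivalent manual wait using kubectl directly (a sketch, not the test's own polling loop):

	kubectl --context old-k8s-version-674802 create -f testdata/busybox.yaml
	kubectl --context old-k8s-version-674802 wait --for=condition=Ready pod -l integration-test=busybox --timeout=8m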

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.62s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-674802 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-674802 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.39770997s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-674802 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.62s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (13.05s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-674802 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-674802 --alsologtostderr -v=3: (13.051104631s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (13.05s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (63.8s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-196138 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-196138 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.2: (1m3.797952391s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (63.80s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-674802 -n old-k8s-version-674802
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-674802 -n old-k8s-version-674802: exit status 7 (111.391957ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-674802 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.25s)
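
As the "(may be ok)" note says, exit status 7 with Host reporting "Stopped" is tolerated here: the point of the test is that addons can still be enabled against a stopped profile. The two commands from the log, runnable as-is:

	out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-674802 -n old-k8s-version-674802
	out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-674802 --images=MetricsScraper=registry.k8s.io/echoserver:1.4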

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (8.42s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-196138 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [1d02a9f5-02a7-4995-ac42-3f879d30cca0] Pending
helpers_test.go:344: "busybox" [1d02a9f5-02a7-4995-ac42-3f879d30cca0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [1d02a9f5-02a7-4995-ac42-3f879d30cca0] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.00336011s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-196138 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.42s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.33s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-196138 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-196138 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.135390339s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-196138 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.33s)
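
The --images/--registries flags above repoint metrics-server at the deliberately unreachable fake.domain registry, and the follow-up kubectl describe is what verifies the override stuck. A narrower check of the same thing (the jsonpath expression is mine, not the harness's):

$ kubectl --context no-preload-196138 -n kube-system get deploy metrics-server \
    -o jsonpath='{.spec.template.spec.containers[0].image}'
# expected: fake.domain/registry.k8s.io/echoserver:1.4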

TestStartStop/group/no-preload/serial/Stop (12.06s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-196138 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-196138 --alsologtostderr -v=3: (12.057007224s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.06s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-196138 -n no-preload-196138
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-196138 -n no-preload-196138: exit status 7 (82.867637ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-196138 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/no-preload/serial/SecondStart (266.81s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-196138 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.2
E1028 11:28:49.428805 1319098 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/functional-355847/client.crt: no such file or directory" logger="UnhandledError"
E1028 11:29:28.001018 1319098 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/addons-487046/client.crt: no such file or directory" logger="UnhandledError"
E1028 11:30:46.363305 1319098 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/functional-355847/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-196138 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.2: (4m26.40614499s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-196138 -n no-preload-196138
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (266.81s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-q2jkl" [acea834e-5dbf-4d2c-b115-3cd07fb34288] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004865203s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)
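
UserAppExistsAfterStop asserts that the dashboard enabled while the profile was stopped is actually serving after SecondStart; the harness polls by label, which from a shell looks like:

$ kubectl --context no-preload-196138 -n kubernetes-dashboard \
    get pods -l k8s-app=kubernetes-dashboard
# the kubernetes-dashboard-695b96c756-q2jkl pod above should report Running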

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.1s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-q2jkl" [acea834e-5dbf-4d2c-b115-3cd07fb34288] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003582139s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-196138 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.10s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-196138 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241007-36f62932
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)
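
VerifyKubernetesImages lists every image present in the node's containerd store and flags anything outside the expected core set; the kindnetd and busybox entries above are known, benign extras. A sketch of the same audit in shell (the jq field name is an assumption; the test parses the JSON in Go):

$ out/minikube-linux-arm64 -p no-preload-196138 image list --format=json \
    | jq -r '.[].repoTags[]'
# then compare against the expected image set for Kubernetes v1.31.2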

TestStartStop/group/no-preload/serial/Pause (3.04s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-196138 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-196138 -n no-preload-196138
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-196138 -n no-preload-196138: exit status 2 (330.745123ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-196138 -n no-preload-196138
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-196138 -n no-preload-196138: exit status 2 (333.954784ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-196138 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-196138 -n no-preload-196138
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-196138 -n no-preload-196138
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.04s)
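
Pause is verified per component rather than via the overall host state: after pausing, the APIServer template reports Paused and the Kubelet template reports Stopped (each with exit status 2, which the harness tolerates), and unpause must clear both. Condensed:

$ out/minikube-linux-arm64 pause -p no-preload-196138
$ out/minikube-linux-arm64 status --format='{{.APIServer}}' -p no-preload-196138   # Paused, exit 2
$ out/minikube-linux-arm64 status --format='{{.Kubelet}}' -p no-preload-196138     # Stopped, exit 2
$ out/minikube-linux-arm64 unpause -p no-preload-196138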

TestStartStop/group/embed-certs/serial/FirstStart (94.53s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-542883 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-542883 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.2: (1m34.526598441s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (94.53s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-v2szp" [66e983d2-1cb0-425c-9df7-c4f85c55ad6a] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.010238662s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.13s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-v2szp" [66e983d2-1cb0-425c-9df7-c4f85c55ad6a] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004960331s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-674802 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.13s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-674802 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241007-36f62932
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.28s)

TestStartStop/group/old-k8s-version/serial/Pause (3.48s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-674802 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p old-k8s-version-674802 --alsologtostderr -v=1: (1.056351485s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-674802 -n old-k8s-version-674802
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-674802 -n old-k8s-version-674802: exit status 2 (452.561657ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-674802 -n old-k8s-version-674802
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-674802 -n old-k8s-version-674802: exit status 2 (339.10831ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-674802 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-674802 -n old-k8s-version-674802
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-674802 -n old-k8s-version-674802
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.48s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (51.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-355699 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-355699 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.2: (51.075525996s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (51.08s)
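
The default-k8s-diff-port group is the same start path with the API server pinned to 8444 via --apiserver-port instead of the default 8443. One way to confirm the port took effect (the jsonpath filter is mine):

$ kubectl config view -o jsonpath='{.clusters[?(@.name=="default-k8s-diff-port-355699")].cluster.server}'
# the server URL should end in :8444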

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.36s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-355699 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [29f76855-ea05-4860-98d6-c6edb5ac2abe] Pending
helpers_test.go:344: "busybox" [29f76855-ea05-4860-98d6-c6edb5ac2abe] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [29f76855-ea05-4860-98d6-c6edb5ac2abe] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.003878866s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-355699 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.36s)

TestStartStop/group/embed-certs/serial/DeployApp (9.31s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-542883 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [c4209abb-8c40-4d93-a98b-847cc04b9135] Pending
helpers_test.go:344: "busybox" [c4209abb-8c40-4d93-a98b-847cc04b9135] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [c4209abb-8c40-4d93-a98b-847cc04b9135] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.004522719s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-542883 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.31s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-355699 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-355699 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.09s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-355699 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-355699 --alsologtostderr -v=3: (12.026864528s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.03s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.07s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-542883 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-542883 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.07s)

TestStartStop/group/embed-certs/serial/Stop (12.05s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-542883 --alsologtostderr -v=3
E1028 11:34:28.001549 1319098 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/addons-487046/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-542883 --alsologtostderr -v=3: (12.0461704s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.05s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-355699 -n default-k8s-diff-port-355699
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-355699 -n default-k8s-diff-port-355699: exit status 7 (69.764727ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-355699 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (293.04s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-355699 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-355699 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.2: (4m52.668504407s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-355699 -n default-k8s-diff-port-355699
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (293.04s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.28s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-542883 -n embed-certs-542883
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-542883 -n embed-certs-542883: exit status 7 (129.187437ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-542883 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.28s)

TestStartStop/group/embed-certs/serial/SecondStart (305.13s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-542883 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.2
E1028 11:35:46.363026 1319098 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/functional-355847/client.crt: no such file or directory" logger="UnhandledError"
E1028 11:36:19.446243 1319098 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/old-k8s-version-674802/client.crt: no such file or directory" logger="UnhandledError"
E1028 11:36:19.452598 1319098 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/old-k8s-version-674802/client.crt: no such file or directory" logger="UnhandledError"
E1028 11:36:19.463959 1319098 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/old-k8s-version-674802/client.crt: no such file or directory" logger="UnhandledError"
E1028 11:36:19.485249 1319098 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/old-k8s-version-674802/client.crt: no such file or directory" logger="UnhandledError"
E1028 11:36:19.526612 1319098 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/old-k8s-version-674802/client.crt: no such file or directory" logger="UnhandledError"
E1028 11:36:19.608045 1319098 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/old-k8s-version-674802/client.crt: no such file or directory" logger="UnhandledError"
E1028 11:36:19.769716 1319098 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/old-k8s-version-674802/client.crt: no such file or directory" logger="UnhandledError"
E1028 11:36:20.092037 1319098 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/old-k8s-version-674802/client.crt: no such file or directory" logger="UnhandledError"
E1028 11:36:20.733962 1319098 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/old-k8s-version-674802/client.crt: no such file or directory" logger="UnhandledError"
E1028 11:36:22.015379 1319098 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/old-k8s-version-674802/client.crt: no such file or directory" logger="UnhandledError"
E1028 11:36:24.576705 1319098 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/old-k8s-version-674802/client.crt: no such file or directory" logger="UnhandledError"
E1028 11:36:29.698349 1319098 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/old-k8s-version-674802/client.crt: no such file or directory" logger="UnhandledError"
E1028 11:36:39.940259 1319098 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/old-k8s-version-674802/client.crt: no such file or directory" logger="UnhandledError"
E1028 11:37:00.422081 1319098 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/old-k8s-version-674802/client.crt: no such file or directory" logger="UnhandledError"
E1028 11:37:35.435535 1319098 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/no-preload-196138/client.crt: no such file or directory" logger="UnhandledError"
E1028 11:37:35.441998 1319098 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/no-preload-196138/client.crt: no such file or directory" logger="UnhandledError"
E1028 11:37:35.453413 1319098 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/no-preload-196138/client.crt: no such file or directory" logger="UnhandledError"
E1028 11:37:35.474767 1319098 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/no-preload-196138/client.crt: no such file or directory" logger="UnhandledError"
E1028 11:37:35.516165 1319098 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/no-preload-196138/client.crt: no such file or directory" logger="UnhandledError"
E1028 11:37:35.597569 1319098 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/no-preload-196138/client.crt: no such file or directory" logger="UnhandledError"
E1028 11:37:35.759113 1319098 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/no-preload-196138/client.crt: no such file or directory" logger="UnhandledError"
E1028 11:37:36.080815 1319098 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/no-preload-196138/client.crt: no such file or directory" logger="UnhandledError"
E1028 11:37:36.722917 1319098 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/no-preload-196138/client.crt: no such file or directory" logger="UnhandledError"
E1028 11:37:38.005023 1319098 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/no-preload-196138/client.crt: no such file or directory" logger="UnhandledError"
E1028 11:37:40.566902 1319098 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/no-preload-196138/client.crt: no such file or directory" logger="UnhandledError"
E1028 11:37:41.383404 1319098 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/old-k8s-version-674802/client.crt: no such file or directory" logger="UnhandledError"
E1028 11:37:45.688216 1319098 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/no-preload-196138/client.crt: no such file or directory" logger="UnhandledError"
E1028 11:37:55.930401 1319098 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/no-preload-196138/client.crt: no such file or directory" logger="UnhandledError"
E1028 11:38:16.411720 1319098 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/no-preload-196138/client.crt: no such file or directory" logger="UnhandledError"
E1028 11:38:57.373874 1319098 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/no-preload-196138/client.crt: no such file or directory" logger="UnhandledError"
E1028 11:39:03.305089 1319098 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/old-k8s-version-674802/client.crt: no such file or directory" logger="UnhandledError"
E1028 11:39:11.071868 1319098 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/addons-487046/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-542883 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.2: (5m4.715158616s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-542883 -n embed-certs-542883
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (305.13s)
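
For the embed-certs group, --embed-certs inlines the client certificate and key into the kubeconfig entry instead of referencing files under .minikube/profiles, and the restart has to preserve that. A quick spot check (assuming minikube named the kubeconfig user after the profile, as it normally does):

$ kubectl config view --raw -o jsonpath='{.users[?(@.name=="embed-certs-542883")].user.client-certificate-data}' | head -c 20
# non-empty base64 output means the cert is embedded rather than a file path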

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-8jprl" [1cef2fb3-d67c-439d-9571-5132ea949898] Running
E1028 11:39:28.001236 1319098 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/addons-487046/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004499355s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-8jprl" [1cef2fb3-d67c-439d-9571-5132ea949898] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00380283s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-355699 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-355699 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241007-36f62932
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-355699 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-355699 -n default-k8s-diff-port-355699
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-355699 -n default-k8s-diff-port-355699: exit status 2 (303.746916ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-355699 -n default-k8s-diff-port-355699
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-355699 -n default-k8s-diff-port-355699: exit status 2 (323.358695ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-355699 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-355699 -n default-k8s-diff-port-355699
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-355699 -n default-k8s-diff-port-355699
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.01s)

TestStartStop/group/newest-cni/serial/FirstStart (40.91s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-069009 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-069009 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.2: (40.911194808s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (40.91s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-mzk7l" [4b0a138a-d4f4-4984-a97a-4c1e5b1bab89] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004746431s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-mzk7l" [4b0a138a-d4f4-4984-a97a-4c1e5b1bab89] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004333491s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-542883 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.33s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-542883 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241007-36f62932
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.33s)

TestStartStop/group/embed-certs/serial/Pause (3.88s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-542883 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p embed-certs-542883 --alsologtostderr -v=1: (1.083864306s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-542883 -n embed-certs-542883
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-542883 -n embed-certs-542883: exit status 2 (374.147505ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-542883 -n embed-certs-542883
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-542883 -n embed-certs-542883: exit status 2 (388.02257ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-542883 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-542883 -n embed-certs-542883
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-542883 -n embed-certs-542883
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.88s)

TestNetworkPlugins/group/auto/Start (100.04s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-721163 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
E1028 11:40:19.295243 1319098 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/no-preload-196138/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-721163 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (1m40.042873701s)
--- PASS: TestNetworkPlugins/group/auto/Start (100.04s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.42s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-069009 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-069009 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.416259491s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.42s)

TestStartStop/group/newest-cni/serial/Stop (1.35s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-069009 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-069009 --alsologtostderr -v=3: (1.351990585s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.35s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.25s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-069009 -n newest-cni-069009
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-069009 -n newest-cni-069009: exit status 7 (93.88099ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-069009 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.25s)

TestStartStop/group/newest-cni/serial/SecondStart (22.59s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-069009 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.2
E1028 11:40:46.363121 1319098 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/functional-355847/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-069009 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.2: (22.143814392s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-069009 -n newest-cni-069009
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (22.59s)
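
newest-cni forwards the pod network range to kubeadm via --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16; once the node registers, the per-node slice is visible on its spec:

$ kubectl --context newest-cni-069009 get nodes -o jsonpath='{.items[0].spec.podCIDR}'
# expected: a /24 carved from 10.42.0.0/16, e.g. 10.42.0.0/24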

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-069009 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241007-36f62932
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/newest-cni/serial/Pause (2.93s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-069009 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-069009 -n newest-cni-069009
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-069009 -n newest-cni-069009: exit status 2 (334.418929ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-069009 -n newest-cni-069009
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-069009 -n newest-cni-069009: exit status 2 (312.60493ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-069009 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-069009 -n newest-cni-069009
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-069009 -n newest-cni-069009
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.93s)
E1028 11:46:19.446575 1319098 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/old-k8s-version-674802/client.crt: no such file or directory" logger="UnhandledError"

TestNetworkPlugins/group/kindnet/Start (88.96s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-721163 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
E1028 11:41:19.446397 1319098 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/old-k8s-version-674802/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-721163 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (1m28.957877845s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (88.96s)
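
--cni=kindnet deploys kindnet as a DaemonSet in kube-system; a quick readiness check after the start completes (app=kindnet is kindnet's usual pod label, assumed here rather than taken from this log):

$ kubectl --context kindnet-721163 -n kube-system get pods -l app=kindnet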

TestNetworkPlugins/group/auto/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-721163 "pgrep -a kubelet"
I1028 11:41:43.920407 1319098 config.go:182] Loaded profile config "auto-721163": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.31s)
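Editor's note: each KubeletFlags step captures the kubelet command line on the node via `minikube ssh -p <profile> "pgrep -a kubelet"`. The assertion itself is not shown in this log; below is a hedged Go sketch of one plausible check. The flag inspected (--container-runtime-endpoint) is an assumption, not taken from this run.
	package main
	
	import (
		"log"
		"os/exec"
		"strings"
	)
	
	func main() {
		// Capture the kubelet process line inside the node, as net_test.go:133 does.
		out, err := exec.Command("out/minikube-linux-arm64", "ssh",
			"-p", "auto-721163", "pgrep -a kubelet").CombinedOutput()
		if err != nil {
			log.Fatalf("minikube ssh failed: %v\n%s", err, out)
		}
		// Hypothetical assertion: on containerd runs, kubelet should point at a CRI socket.
		if !strings.Contains(string(out), "--container-runtime-endpoint") {
			log.Fatalf("expected runtime endpoint flag missing:\n%s", out)
		}
	}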

TestNetworkPlugins/group/auto/NetCatPod (9.28s)
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-721163 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-md8rj" [c835fb1e-9801-4cf1-bb33-ff5b290f7b19] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1028 11:41:47.146381 1319098 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/old-k8s-version-674802/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-md8rj" [c835fb1e-9801-4cf1-bb33-ff5b290f7b19] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.004051807s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.28s)
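Editor's note: every NetCatPod step deploys testdata/netcat-deployment.yaml and waits (up to 15m) for a pod labelled app=netcat to leave Pending. A minimal client-go sketch of that polling loop, assuming the kubeconfig path comes from $KUBECONFIG; the real helper in helpers_test.go may differ and, per the Ready:ContainersNotReady lines above, likely also checks readiness conditions.
	package main
	
	import (
		"context"
		"fmt"
		"os"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		deadline := time.Now().Add(15 * time.Minute)
		for time.Now().Before(deadline) {
			pods, err := cs.CoreV1().Pods("default").List(context.TODO(),
				metav1.ListOptions{LabelSelector: "app=netcat"})
			if err == nil {
				for _, p := range pods.Items {
					if p.Status.Phase == corev1.PodRunning {
						fmt.Println("running:", p.Name)
						return
					}
				}
			}
			time.Sleep(2 * time.Second) // poll interval is an assumption
		}
		fmt.Fprintln(os.Stderr, "timed out waiting for app=netcat")
		os.Exit(1)
	}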

TestNetworkPlugins/group/auto/DNS (0.19s)
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-721163 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.19s)
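Editor's note: the DNS step is a single in-pod lookup of the cluster service name; if CoreDNS is unreachable over the CNI under test, nslookup exits non-zero and the step fails. A standalone Go equivalent of the kubectl invocation shown above:
	package main
	
	import (
		"fmt"
		"os"
		"os/exec"
	)
	
	func main() {
		// Same invocation as net_test.go:175: resolve the API service name from
		// inside the netcat pod; failure means cluster DNS is broken on this CNI.
		cmd := exec.Command("kubectl", "--context", "auto-721163", "exec",
			"deployment/netcat", "--", "nslookup", "kubernetes.default")
		out, err := cmd.CombinedOutput()
		fmt.Printf("%s", out)
		if err != nil {
			os.Exit(1)
		}
	}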

TestNetworkPlugins/group/auto/Localhost (0.15s)
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-721163 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.15s)

TestNetworkPlugins/group/auto/HairPin (0.17s)
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-721163 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.17s)
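Editor's note: Localhost and HairPin differ only in the target. `nc -w 5 -i 5 -z localhost 8080` checks the pod can reach its own port directly, while `nc ... netcat 8080` forces the connection out through the pod's own Service and back (hairpin traffic), which some CNI/bridge configurations drop. In nc terms, -z only scans without sending data, -w 5 bounds the wait, -i 5 spaces probes. A rough Go equivalent of the hairpin probe; it would have to run inside the netcat pod for the service name to resolve.
	package main
	
	import (
		"fmt"
		"net"
		"os"
		"time"
	)
	
	func main() {
		// Equivalent of `nc -w 5 -z netcat 8080`: open and immediately close a
		// TCP connection to the pod's own Service. Success means the CNI
		// supports hairpin traffic (a pod reaching itself via its service).
		conn, err := net.DialTimeout("tcp", "netcat:8080", 5*time.Second)
		if err != nil {
			fmt.Fprintln(os.Stderr, "hairpin check failed:", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("hairpin OK")
	}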

TestNetworkPlugins/group/calico/Start (63.23s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-721163 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-721163 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (1m3.233149984s)
--- PASS: TestNetworkPlugins/group/calico/Start (63.23s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-58fm4" [937eb899-f83c-4e95-8f80-5c0fdf37139c] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003720141s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.33s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-721163 "pgrep -a kubelet"
I1028 11:42:31.516252 1319098 config.go:182] Loaded profile config "kindnet-721163": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.33s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.34s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-721163 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-lgwwx" [eb7b14af-bb2e-4dc3-a7aa-3febcd8e450d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1028 11:42:35.435758 1319098 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/no-preload-196138/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-lgwwx" [eb7b14af-bb2e-4dc3-a7aa-3febcd8e450d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.003779704s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.34s)

TestNetworkPlugins/group/kindnet/DNS (0.25s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-721163 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.25s)

TestNetworkPlugins/group/kindnet/Localhost (0.16s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-721163 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.16s)

TestNetworkPlugins/group/kindnet/HairPin (0.18s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-721163 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.18s)

TestNetworkPlugins/group/custom-flannel/Start (52.45s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-721163 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-721163 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (52.454733139s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (52.45s)
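Editor's note: unlike --cni=kindnet or --cni=calico elsewhere in this run, custom-flannel passes a manifest path (testdata/kube-flannel.yaml), so minikube applies a user-supplied CNI manifest instead of a bundled plugin. A sketch of the distinction the flag encodes; the built-in name list below is an assumption drawn from the plugins exercised in this report, and how minikube itself branches on the value is not shown in the log.
	package main
	
	import (
		"fmt"
		"strings"
	)
	
	// isCustomCNI reports whether a --cni value looks like a manifest path
	// rather than a built-in plugin name. Illustrative only.
	func isCustomCNI(value string) bool {
		switch value {
		case "auto", "bridge", "calico", "cilium", "flannel", "kindnet":
			return false
		}
		return strings.HasSuffix(value, ".yaml") || strings.HasSuffix(value, ".yml")
	}
	
	func main() {
		fmt.Println(isCustomCNI("kindnet"))                    // false: bundled plugin
		fmt.Println(isCustomCNI("testdata/kube-flannel.yaml")) // true: user manifest
	}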

TestNetworkPlugins/group/calico/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-jqfmq" [613548ad-f430-44b6-a360-83147ae52318] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.006250549s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.38s)
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-721163 "pgrep -a kubelet"
I1028 11:43:23.945991 1319098 config.go:182] Loaded profile config "calico-721163": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.38s)

TestNetworkPlugins/group/calico/NetCatPod (12.32s)
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-721163 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-tcr4k" [0fe47dcf-9b1e-4dca-8b6c-549e506837ad] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-tcr4k" [0fe47dcf-9b1e-4dca-8b6c-549e506837ad] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.007500691s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.32s)

TestNetworkPlugins/group/calico/DNS (0.32s)
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-721163 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.32s)

TestNetworkPlugins/group/calico/Localhost (0.21s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-721163 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.21s)

TestNetworkPlugins/group/calico/HairPin (0.18s)
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-721163 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.18s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.3s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-721163 "pgrep -a kubelet"
I1028 11:43:59.512805 1319098 config.go:182] Loaded profile config "custom-flannel-721163": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.30s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (9.37s)
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-721163 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-k9dqk" [1521fec6-6667-4989-8435-a1cf32442955] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-k9dqk" [1521fec6-6667-4989-8435-a1cf32442955] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.004894298s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.37s)

TestNetworkPlugins/group/enable-default-cni/Start (72.71s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-721163 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-721163 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (1m12.712279809s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (72.71s)

TestNetworkPlugins/group/custom-flannel/DNS (0.22s)
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-721163 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.22s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.21s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-721163 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.21s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.21s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-721163 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.21s)

TestNetworkPlugins/group/flannel/Start (48.68s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-721163 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
E1028 11:44:53.252565 1319098 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/default-k8s-diff-port-355699/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-721163 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (48.684311724s)
--- PASS: TestNetworkPlugins/group/flannel/Start (48.68s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.28s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-721163 "pgrep -a kubelet"
I1028 11:45:13.663920 1319098 config.go:182] Loaded profile config "enable-default-cni-721163": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.28s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.3s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-721163 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-ljh2q" [d66c3183-43ab-42f0-a842-c5d307a86366] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-ljh2q" [d66c3183-43ab-42f0-a842-c5d307a86366] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.004128721s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.30s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.21s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-721163 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.21s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.18s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-721163 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.18s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-2qdvm" [85d0e62b-8f83-4302-a0ff-d6b8fe9018d5] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004338189s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-721163 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.41s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-721163 "pgrep -a kubelet"
E1028 11:45:29.430472 1319098 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/functional-355847/client.crt: no such file or directory" logger="UnhandledError"
I1028 11:45:29.611840 1319098 config.go:182] Loaded profile config "flannel-721163": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.41s)

TestNetworkPlugins/group/flannel/NetCatPod (11.35s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-721163 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-kkxhj" [c2efff59-0fc6-429a-a66a-a43eeb10b0b1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1028 11:45:34.214116 1319098 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/default-k8s-diff-port-355699/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-kkxhj" [c2efff59-0fc6-429a-a66a-a43eeb10b0b1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.003737489s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.35s)

TestNetworkPlugins/group/flannel/DNS (0.25s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-721163 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.25s)

TestNetworkPlugins/group/flannel/Localhost (0.26s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-721163 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.26s)

TestNetworkPlugins/group/flannel/HairPin (0.21s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-721163 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.21s)

TestNetworkPlugins/group/bridge/Start (47.73s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-721163 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
E1028 11:45:46.362676 1319098 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/functional-355847/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-721163 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (47.72876285s)
--- PASS: TestNetworkPlugins/group/bridge/Start (47.73s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.27s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-721163 "pgrep -a kubelet"
I1028 11:46:32.422240 1319098 config.go:182] Loaded profile config "bridge-721163": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.27s)

TestNetworkPlugins/group/bridge/NetCatPod (10.28s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-721163 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-6tdrg" [d4a56176-5023-48df-a181-37695b735419] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-6tdrg" [d4a56176-5023-48df-a181-37695b735419] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.004743116s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.28s)

TestNetworkPlugins/group/bridge/DNS (0.16s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-721163 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.16s)

TestNetworkPlugins/group/bridge/Localhost (0.16s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-721163 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.16s)

TestNetworkPlugins/group/bridge/HairPin (0.15s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-721163 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)


Test skip (29/330)

TestDownloadOnly/v1.20.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.31.2/cached-images (0s)
=== RUN   TestDownloadOnly/v1.31.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.2/cached-images (0.00s)

TestDownloadOnly/v1.31.2/binaries (0s)
=== RUN   TestDownloadOnly/v1.31.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.2/binaries (0.00s)

TestDownloadOnly/v1.31.2/kubectl (0s)
=== RUN   TestDownloadOnly/v1.31.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.2/kubectl (0.00s)

TestDownloadOnlyKic (0.52s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-291724 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-291724" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-291724
--- SKIP: TestDownloadOnlyKic (0.52s)

TestOffline (0s)
=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0s)
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:698: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:972: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerFlags (0s)
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestKVMDriverInstallOrUpdate (0s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.21s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-963234" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-963234
--- SKIP: TestStartStop/group/disable-driver-mounts (0.21s)

TestNetworkPlugins/group/kubenet (4.53s)
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-721163 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-721163

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-721163

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-721163

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-721163

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-721163

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-721163

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-721163

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-721163

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-721163

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-721163

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-721163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-721163"

>>> host: /etc/hosts:
* Profile "kubenet-721163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-721163"

>>> host: /etc/resolv.conf:
* Profile "kubenet-721163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-721163"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-721163

>>> host: crictl pods:
* Profile "kubenet-721163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-721163"

>>> host: crictl containers:
* Profile "kubenet-721163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-721163"

>>> k8s: describe netcat deployment:
error: context "kubenet-721163" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-721163" does not exist

>>> k8s: netcat logs:
error: context "kubenet-721163" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-721163" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-721163" does not exist

>>> k8s: coredns logs:
error: context "kubenet-721163" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-721163" does not exist

>>> k8s: api server logs:
error: context "kubenet-721163" does not exist

>>> host: /etc/cni:
* Profile "kubenet-721163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-721163"

>>> host: ip a s:
* Profile "kubenet-721163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-721163"

>>> host: ip r s:
* Profile "kubenet-721163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-721163"

>>> host: iptables-save:
* Profile "kubenet-721163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-721163"

>>> host: iptables table nat:
* Profile "kubenet-721163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-721163"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-721163" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-721163" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-721163" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-721163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-721163"

>>> host: kubelet daemon config:
* Profile "kubenet-721163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-721163"

>>> k8s: kubelet logs:
* Profile "kubenet-721163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-721163"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-721163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-721163"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-721163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-721163"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19876-1313708/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 28 Oct 2024 11:21:28 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: pause-615001
contexts:
- context:
    cluster: pause-615001
    extensions:
    - extension:
        last-update: Mon, 28 Oct 2024 11:21:28 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: pause-615001
  name: pause-615001
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-615001
  user:
    client-certificate: /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/pause-615001/client.crt
    client-key: /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/pause-615001/client.key
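
Editor's note: this kubeconfig explains every "context was not found" line above: the file only defines a pause-615001 context and current-context is empty, so any kubectl --context kubenet-721163 call fails before reaching a cluster. A client-go sketch performing the same lookup; reading the path from $KUBECONFIG is an assumption.
	package main
	
	import (
		"fmt"
		"os"
	
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		cfg, err := clientcmd.LoadFromFile(os.Getenv("KUBECONFIG")) // assumed location
		if err != nil {
			panic(err)
		}
		if _, ok := cfg.Contexts["kubenet-721163"]; !ok {
			fmt.Println(`context "kubenet-721163" does not exist; kubeconfig only defines:`)
			for name := range cfg.Contexts {
				fmt.Println(" -", name)
			}
		}
	}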

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-721163

>>> host: docker daemon status:
* Profile "kubenet-721163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-721163"

>>> host: docker daemon config:
* Profile "kubenet-721163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-721163"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-721163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-721163"

>>> host: docker system info:
* Profile "kubenet-721163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-721163"

>>> host: cri-docker daemon status:
* Profile "kubenet-721163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-721163"

>>> host: cri-docker daemon config:
* Profile "kubenet-721163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-721163"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-721163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-721163"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-721163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-721163"

>>> host: cri-dockerd version:
* Profile "kubenet-721163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-721163"

>>> host: containerd daemon status:
* Profile "kubenet-721163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-721163"

>>> host: containerd daemon config:
* Profile "kubenet-721163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-721163"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-721163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-721163"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-721163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-721163"

>>> host: containerd config dump:
* Profile "kubenet-721163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-721163"

>>> host: crio daemon status:
* Profile "kubenet-721163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-721163"

>>> host: crio daemon config:
* Profile "kubenet-721163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-721163"

>>> host: /etc/crio:
* Profile "kubenet-721163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-721163"

>>> host: crio config:
* Profile "kubenet-721163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-721163"

----------------------- debugLogs end: kubenet-721163 [took: 4.376388458s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-721163" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-721163
--- SKIP: TestNetworkPlugins/group/kubenet (4.53s)

TestNetworkPlugins/group/cilium (5.16s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-721163 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-721163

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-721163

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-721163

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-721163

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-721163

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-721163

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-721163

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-721163

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-721163

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-721163

>>> host: /etc/nsswitch.conf:
* Profile "cilium-721163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-721163"

>>> host: /etc/hosts:
* Profile "cilium-721163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-721163"

>>> host: /etc/resolv.conf:
* Profile "cilium-721163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-721163"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-721163

>>> host: crictl pods:
* Profile "cilium-721163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-721163"

>>> host: crictl containers:
* Profile "cilium-721163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-721163"

>>> k8s: describe netcat deployment:
error: context "cilium-721163" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-721163" does not exist

>>> k8s: netcat logs:
error: context "cilium-721163" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-721163" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-721163" does not exist

>>> k8s: coredns logs:
error: context "cilium-721163" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-721163" does not exist

>>> k8s: api server logs:
error: context "cilium-721163" does not exist

>>> host: /etc/cni:
* Profile "cilium-721163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-721163"

>>> host: ip a s:
* Profile "cilium-721163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-721163"

>>> host: ip r s:
* Profile "cilium-721163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-721163"

>>> host: iptables-save:
* Profile "cilium-721163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-721163"

>>> host: iptables table nat:
* Profile "cilium-721163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-721163"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-721163

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-721163

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-721163" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-721163" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-721163

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-721163

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-721163" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-721163" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-721163" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-721163" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-721163" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-721163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-721163"

>>> host: kubelet daemon config:
* Profile "cilium-721163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-721163"

>>> k8s: kubelet logs:
* Profile "cilium-721163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-721163"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-721163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-721163"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-721163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-721163"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-721163

>>> host: docker daemon status:
* Profile "cilium-721163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-721163"

>>> host: docker daemon config:
* Profile "cilium-721163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-721163"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-721163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-721163"

>>> host: docker system info:
* Profile "cilium-721163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-721163"

>>> host: cri-docker daemon status:
* Profile "cilium-721163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-721163"

>>> host: cri-docker daemon config:
* Profile "cilium-721163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-721163"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-721163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-721163"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-721163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-721163"

>>> host: cri-dockerd version:
* Profile "cilium-721163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-721163"

>>> host: containerd daemon status:
* Profile "cilium-721163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-721163"

>>> host: containerd daemon config:
* Profile "cilium-721163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-721163"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-721163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-721163"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-721163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-721163"

>>> host: containerd config dump:
* Profile "cilium-721163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-721163"

>>> host: crio daemon status:
* Profile "cilium-721163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-721163"

>>> host: crio daemon config:
* Profile "cilium-721163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-721163"

>>> host: /etc/crio:
* Profile "cilium-721163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-721163"

>>> host: crio config:
* Profile "cilium-721163" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-721163"

----------------------- debugLogs end: cilium-721163 [took: 4.936967804s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-721163" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-721163
--- SKIP: TestNetworkPlugins/group/cilium (5.16s)
